00:00:00.000 Started by upstream project "autotest-per-patch" build number 132304 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.020 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.020 The recommended git tool is: git 00:00:00.020 using credential 00000000-0000-0000-0000-000000000002 00:00:00.022 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.041 Fetching changes from the remote Git repository 00:00:00.043 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.070 Using shallow fetch with depth 1 00:00:00.070 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.070 > git --version # timeout=10 00:00:00.096 > git --version # 'git version 2.39.2' 00:00:00.096 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.115 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.115 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.263 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.276 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.286 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:02.286 > git config core.sparsecheckout # timeout=10 00:00:02.299 > git read-tree -mu HEAD # timeout=10 00:00:02.316 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:02.335 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:02.335 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:02.555 [Pipeline] Start of Pipeline 00:00:02.571 [Pipeline] library 00:00:02.572 Loading library shm_lib@master 00:00:02.572 Library shm_lib@master is cached. Copying from home. 00:00:02.587 [Pipeline] node 00:00:02.597 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:00:02.598 [Pipeline] { 00:00:02.608 [Pipeline] catchError 00:00:02.610 [Pipeline] { 00:00:02.623 [Pipeline] wrap 00:00:02.630 [Pipeline] { 00:00:02.636 [Pipeline] stage 00:00:02.637 [Pipeline] { (Prologue) 00:00:02.653 [Pipeline] echo 00:00:02.654 Node: VM-host-SM9 00:00:02.661 [Pipeline] cleanWs 00:00:02.670 [WS-CLEANUP] Deleting project workspace... 00:00:02.670 [WS-CLEANUP] Deferred wipeout is used... 
00:00:02.676 [WS-CLEANUP] done 00:00:02.906 [Pipeline] setCustomBuildProperty 00:00:02.969 [Pipeline] httpRequest 00:00:03.658 [Pipeline] echo 00:00:03.660 Sorcerer 10.211.164.101 is alive 00:00:03.670 [Pipeline] retry 00:00:03.672 [Pipeline] { 00:00:03.687 [Pipeline] httpRequest 00:00:03.691 HttpMethod: GET 00:00:03.692 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:03.692 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:03.694 Response Code: HTTP/1.1 200 OK 00:00:03.694 Success: Status code 200 is in the accepted range: 200,404 00:00:03.695 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:03.840 [Pipeline] } 00:00:03.857 [Pipeline] // retry 00:00:03.865 [Pipeline] sh 00:00:04.145 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:04.160 [Pipeline] httpRequest 00:00:05.453 [Pipeline] echo 00:00:05.454 Sorcerer 10.211.164.101 is alive 00:00:05.462 [Pipeline] retry 00:00:05.464 [Pipeline] { 00:00:05.475 [Pipeline] httpRequest 00:00:05.479 HttpMethod: GET 00:00:05.479 URL: http://10.211.164.101/packages/spdk_d2671b4b71ac60242ad9560556dd6e829f69f853.tar.gz 00:00:05.480 Sending request to url: http://10.211.164.101/packages/spdk_d2671b4b71ac60242ad9560556dd6e829f69f853.tar.gz 00:00:05.481 Response Code: HTTP/1.1 200 OK 00:00:05.481 Success: Status code 200 is in the accepted range: 200,404 00:00:05.482 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk_d2671b4b71ac60242ad9560556dd6e829f69f853.tar.gz 00:00:27.000 [Pipeline] } 00:00:27.019 [Pipeline] // retry 00:00:27.028 [Pipeline] sh 00:00:27.310 + tar --no-same-owner -xf spdk_d2671b4b71ac60242ad9560556dd6e829f69f853.tar.gz 00:00:29.855 [Pipeline] sh 00:00:30.178 + git -C spdk log --oneline -n5 00:00:30.178 d2671b4b7 test/nvme: warning instead of failing the test 00:00:30.179 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:00:30.179 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:00:30.179 4bcab9fb9 correct kick for CQ full case 00:00:30.179 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:00:30.237 [Pipeline] writeFile 00:00:30.253 [Pipeline] sh 00:00:30.535 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:30.548 [Pipeline] sh 00:00:30.972 + cat autorun-spdk.conf 00:00:30.972 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:30.972 SPDK_TEST_NVMF=1 00:00:30.972 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:30.972 SPDK_TEST_URING=1 00:00:30.972 SPDK_TEST_USDT=1 00:00:30.972 SPDK_RUN_UBSAN=1 00:00:30.972 NET_TYPE=virt 00:00:30.972 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:30.980 RUN_NIGHTLY=0 00:00:30.981 [Pipeline] } 00:00:30.996 [Pipeline] // stage 00:00:31.012 [Pipeline] stage 00:00:31.014 [Pipeline] { (Run VM) 00:00:31.030 [Pipeline] sh 00:00:31.310 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:31.310 + echo 'Start stage prepare_nvme.sh' 00:00:31.310 Start stage prepare_nvme.sh 00:00:31.310 + [[ -n 3 ]] 00:00:31.310 + disk_prefix=ex3 00:00:31.310 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 ]] 00:00:31.310 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf ]] 00:00:31.310 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf 00:00:31.310 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:31.310 ++ SPDK_TEST_NVMF=1 
00:00:31.310 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:31.310 ++ SPDK_TEST_URING=1 00:00:31.310 ++ SPDK_TEST_USDT=1 00:00:31.310 ++ SPDK_RUN_UBSAN=1 00:00:31.310 ++ NET_TYPE=virt 00:00:31.310 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:31.310 ++ RUN_NIGHTLY=0 00:00:31.310 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:00:31.310 + nvme_files=() 00:00:31.310 + declare -A nvme_files 00:00:31.310 + backend_dir=/var/lib/libvirt/images/backends 00:00:31.310 + nvme_files['nvme.img']=5G 00:00:31.310 + nvme_files['nvme-cmb.img']=5G 00:00:31.310 + nvme_files['nvme-multi0.img']=4G 00:00:31.310 + nvme_files['nvme-multi1.img']=4G 00:00:31.310 + nvme_files['nvme-multi2.img']=4G 00:00:31.310 + nvme_files['nvme-openstack.img']=8G 00:00:31.310 + nvme_files['nvme-zns.img']=5G 00:00:31.310 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:31.310 + (( SPDK_TEST_FTL == 1 )) 00:00:31.310 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:31.310 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:31.310 + for nvme in "${!nvme_files[@]}" 00:00:31.310 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:00:31.310 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:31.310 + for nvme in "${!nvme_files[@]}" 00:00:31.310 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:00:31.310 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:31.310 + for nvme in "${!nvme_files[@]}" 00:00:31.310 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:00:31.310 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:31.310 + for nvme in "${!nvme_files[@]}" 00:00:31.310 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:00:31.569 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:31.569 + for nvme in "${!nvme_files[@]}" 00:00:31.569 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:00:31.569 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:31.569 + for nvme in "${!nvme_files[@]}" 00:00:31.569 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:00:31.569 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:31.569 + for nvme in "${!nvme_files[@]}" 00:00:31.569 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:00:31.828 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:31.828 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:00:31.828 + echo 'End stage prepare_nvme.sh' 00:00:31.828 End stage prepare_nvme.sh 00:00:31.839 [Pipeline] sh 00:00:32.119 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:32.119 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b 
/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora39 00:00:32.119 00:00:32.119 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant 00:00:32.119 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk 00:00:32.119 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:00:32.119 HELP=0 00:00:32.119 DRY_RUN=0 00:00:32.119 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:00:32.119 NVME_DISKS_TYPE=nvme,nvme, 00:00:32.119 NVME_AUTO_CREATE=0 00:00:32.119 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:00:32.119 NVME_CMB=,, 00:00:32.119 NVME_PMR=,, 00:00:32.119 NVME_ZNS=,, 00:00:32.119 NVME_MS=,, 00:00:32.119 NVME_FDP=,, 00:00:32.119 SPDK_VAGRANT_DISTRO=fedora39 00:00:32.119 SPDK_VAGRANT_VMCPU=10 00:00:32.119 SPDK_VAGRANT_VMRAM=12288 00:00:32.119 SPDK_VAGRANT_PROVIDER=libvirt 00:00:32.119 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:32.119 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:32.119 SPDK_OPENSTACK_NETWORK=0 00:00:32.119 VAGRANT_PACKAGE_BOX=0 00:00:32.119 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:00:32.119 FORCE_DISTRO=true 00:00:32.119 VAGRANT_BOX_VERSION= 00:00:32.119 EXTRA_VAGRANTFILES= 00:00:32.119 NIC_MODEL=e1000 00:00:32.119 00:00:32.119 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt' 00:00:32.119 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:00:35.406 Bringing machine 'default' up with 'libvirt' provider... 00:00:35.406 ==> default: Creating image (snapshot of base box volume). 00:00:35.664 ==> default: Creating domain with the following settings... 
00:00:35.664 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731674143_5fe203e516dcc3ae2a1f 00:00:35.664 ==> default: -- Domain type: kvm 00:00:35.664 ==> default: -- Cpus: 10 00:00:35.665 ==> default: -- Feature: acpi 00:00:35.665 ==> default: -- Feature: apic 00:00:35.665 ==> default: -- Feature: pae 00:00:35.665 ==> default: -- Memory: 12288M 00:00:35.665 ==> default: -- Memory Backing: hugepages: 00:00:35.665 ==> default: -- Management MAC: 00:00:35.665 ==> default: -- Loader: 00:00:35.665 ==> default: -- Nvram: 00:00:35.665 ==> default: -- Base box: spdk/fedora39 00:00:35.665 ==> default: -- Storage pool: default 00:00:35.665 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731674143_5fe203e516dcc3ae2a1f.img (20G) 00:00:35.665 ==> default: -- Volume Cache: default 00:00:35.665 ==> default: -- Kernel: 00:00:35.665 ==> default: -- Initrd: 00:00:35.665 ==> default: -- Graphics Type: vnc 00:00:35.665 ==> default: -- Graphics Port: -1 00:00:35.665 ==> default: -- Graphics IP: 127.0.0.1 00:00:35.665 ==> default: -- Graphics Password: Not defined 00:00:35.665 ==> default: -- Video Type: cirrus 00:00:35.665 ==> default: -- Video VRAM: 9216 00:00:35.665 ==> default: -- Sound Type: 00:00:35.665 ==> default: -- Keymap: en-us 00:00:35.665 ==> default: -- TPM Path: 00:00:35.665 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:35.665 ==> default: -- Command line args: 00:00:35.665 ==> default: -> value=-device, 00:00:35.665 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:35.665 ==> default: -> value=-drive, 00:00:35.665 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:00:35.665 ==> default: -> value=-device, 00:00:35.665 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:35.665 ==> default: -> value=-device, 00:00:35.665 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:35.665 ==> default: -> value=-drive, 00:00:35.665 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:35.665 ==> default: -> value=-device, 00:00:35.665 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:35.665 ==> default: -> value=-drive, 00:00:35.665 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:35.665 ==> default: -> value=-device, 00:00:35.665 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:35.665 ==> default: -> value=-drive, 00:00:35.665 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:35.665 ==> default: -> value=-device, 00:00:35.665 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:35.665 ==> default: Creating shared folders metadata... 00:00:35.665 ==> default: Starting domain. 00:00:37.038 ==> default: Waiting for domain to get an IP address... 00:00:55.118 ==> default: Waiting for SSH to become available... 00:00:56.053 ==> default: Configuring and enabling network interfaces... 
00:01:00.245 default: SSH address: 192.168.121.226:22 00:01:00.245 default: SSH username: vagrant 00:01:00.245 default: SSH auth method: private key 00:01:02.780 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:10.974 ==> default: Mounting SSHFS shared folder... 00:01:11.541 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:11.541 ==> default: Checking Mount.. 00:01:12.918 ==> default: Folder Successfully Mounted! 00:01:12.918 ==> default: Running provisioner: file... 00:01:13.483 default: ~/.gitconfig => .gitconfig 00:01:14.048 00:01:14.049 SUCCESS! 00:01:14.049 00:01:14.049 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:01:14.049 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:14.049 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 00:01:14.049 00:01:14.057 [Pipeline] } 00:01:14.072 [Pipeline] // stage 00:01:14.082 [Pipeline] dir 00:01:14.083 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt 00:01:14.085 [Pipeline] { 00:01:14.098 [Pipeline] catchError 00:01:14.100 [Pipeline] { 00:01:14.114 [Pipeline] sh 00:01:14.394 + vagrant ssh-config --host vagrant 00:01:14.394 + sed -ne /^Host/,$p 00:01:14.394 + tee ssh_conf 00:01:17.679 Host vagrant 00:01:17.679 HostName 192.168.121.226 00:01:17.679 User vagrant 00:01:17.679 Port 22 00:01:17.679 UserKnownHostsFile /dev/null 00:01:17.679 StrictHostKeyChecking no 00:01:17.679 PasswordAuthentication no 00:01:17.679 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:17.679 IdentitiesOnly yes 00:01:17.679 LogLevel FATAL 00:01:17.679 ForwardAgent yes 00:01:17.679 ForwardX11 yes 00:01:17.679 00:01:17.692 [Pipeline] withEnv 00:01:17.694 [Pipeline] { 00:01:17.707 [Pipeline] sh 00:01:17.986 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:17.987 source /etc/os-release 00:01:17.987 [[ -e /image.version ]] && img=$(< /image.version) 00:01:17.987 # Minimal, systemd-like check. 00:01:17.987 if [[ -e /.dockerenv ]]; then 00:01:17.987 # Clear garbage from the node's name: 00:01:17.987 # agt-er_autotest_547-896 -> autotest_547-896 00:01:17.987 # $HOSTNAME is the actual container id 00:01:17.987 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:17.987 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:17.987 # We can assume this is a mount from a host where container is running, 00:01:17.987 # so fetch its hostname to easily identify the target swarm worker. 
00:01:17.987 container="$(< /etc/hostname) ($agent)" 00:01:17.987 else 00:01:17.987 # Fallback 00:01:17.987 container=$agent 00:01:17.987 fi 00:01:17.987 fi 00:01:17.987 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:17.987 00:01:18.256 [Pipeline] } 00:01:18.272 [Pipeline] // withEnv 00:01:18.280 [Pipeline] setCustomBuildProperty 00:01:18.294 [Pipeline] stage 00:01:18.296 [Pipeline] { (Tests) 00:01:18.312 [Pipeline] sh 00:01:18.590 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:18.862 [Pipeline] sh 00:01:19.141 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:19.412 [Pipeline] timeout 00:01:19.412 Timeout set to expire in 1 hr 0 min 00:01:19.414 [Pipeline] { 00:01:19.428 [Pipeline] sh 00:01:19.705 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:20.271 HEAD is now at d2671b4b7 test/nvme: warning instead of failing the test 00:01:20.282 [Pipeline] sh 00:01:20.572 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:20.845 [Pipeline] sh 00:01:21.125 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:21.399 [Pipeline] sh 00:01:21.679 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:21.937 ++ readlink -f spdk_repo 00:01:21.937 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:21.937 + [[ -n /home/vagrant/spdk_repo ]] 00:01:21.937 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:21.937 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:21.937 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:21.938 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:21.938 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:21.938 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:21.938 + cd /home/vagrant/spdk_repo 00:01:21.938 + source /etc/os-release 00:01:21.938 ++ NAME='Fedora Linux' 00:01:21.938 ++ VERSION='39 (Cloud Edition)' 00:01:21.938 ++ ID=fedora 00:01:21.938 ++ VERSION_ID=39 00:01:21.938 ++ VERSION_CODENAME= 00:01:21.938 ++ PLATFORM_ID=platform:f39 00:01:21.938 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:21.938 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:21.938 ++ LOGO=fedora-logo-icon 00:01:21.938 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:21.938 ++ HOME_URL=https://fedoraproject.org/ 00:01:21.938 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:21.938 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:21.938 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:21.938 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:21.938 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:21.938 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:21.938 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:21.938 ++ SUPPORT_END=2024-11-12 00:01:21.938 ++ VARIANT='Cloud Edition' 00:01:21.938 ++ VARIANT_ID=cloud 00:01:21.938 + uname -a 00:01:21.938 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:21.938 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:22.196 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:22.196 Hugepages 00:01:22.196 node hugesize free / total 00:01:22.196 node0 1048576kB 0 / 0 00:01:22.196 node0 2048kB 0 / 0 00:01:22.196 00:01:22.196 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:22.455 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:22.455 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:22.455 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:22.455 + rm -f /tmp/spdk-ld-path 00:01:22.455 + source autorun-spdk.conf 00:01:22.455 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.455 ++ SPDK_TEST_NVMF=1 00:01:22.455 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:22.455 ++ SPDK_TEST_URING=1 00:01:22.455 ++ SPDK_TEST_USDT=1 00:01:22.455 ++ SPDK_RUN_UBSAN=1 00:01:22.455 ++ NET_TYPE=virt 00:01:22.455 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:22.455 ++ RUN_NIGHTLY=0 00:01:22.455 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:22.455 + [[ -n '' ]] 00:01:22.455 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:22.455 + for M in /var/spdk/build-*-manifest.txt 00:01:22.455 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:22.455 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:22.455 + for M in /var/spdk/build-*-manifest.txt 00:01:22.455 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:22.455 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:22.455 + for M in /var/spdk/build-*-manifest.txt 00:01:22.455 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:22.455 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:22.455 ++ uname 00:01:22.455 + [[ Linux == \L\i\n\u\x ]] 00:01:22.455 + sudo dmesg -T 00:01:22.455 + sudo dmesg --clear 00:01:22.455 + dmesg_pid=5274 00:01:22.455 + [[ Fedora Linux == FreeBSD ]] 00:01:22.455 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.455 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.455 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:22.455 + sudo dmesg -Tw 00:01:22.455 + [[ -x /usr/src/fio-static/fio ]] 00:01:22.455 + export FIO_BIN=/usr/src/fio-static/fio 00:01:22.455 + FIO_BIN=/usr/src/fio-static/fio 00:01:22.455 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:22.455 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:22.455 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:22.455 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.455 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.455 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:22.455 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.455 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.455 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:22.714 12:36:31 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:22.714 12:36:31 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:22.714 12:36:31 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.714 12:36:31 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:22.714 12:36:31 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:22.714 12:36:31 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:01:22.714 12:36:31 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:01:22.714 12:36:31 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:22.714 12:36:31 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:01:22.714 12:36:31 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:22.714 12:36:31 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:22.714 12:36:31 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:22.714 12:36:31 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:22.714 12:36:31 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:22.714 12:36:31 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:22.714 12:36:31 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:22.714 12:36:31 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:22.714 12:36:31 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:22.714 12:36:31 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:22.714 12:36:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.714 12:36:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.714 12:36:31 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.715 12:36:31 -- paths/export.sh@5 -- $ export PATH 00:01:22.715 12:36:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.715 12:36:31 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:22.715 12:36:31 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:22.715 12:36:31 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731674191.XXXXXX 00:01:22.715 12:36:31 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731674191.euEViJ 00:01:22.715 12:36:31 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:22.715 12:36:31 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:22.715 12:36:31 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:22.715 12:36:31 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:22.715 12:36:31 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:22.715 12:36:31 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:22.715 12:36:31 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:22.715 12:36:31 -- common/autotest_common.sh@10 -- $ set +x 00:01:22.715 12:36:31 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:22.715 12:36:31 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:22.715 12:36:31 -- pm/common@17 -- $ local monitor 00:01:22.715 12:36:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.715 12:36:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.715 12:36:31 -- pm/common@25 -- $ sleep 1 00:01:22.715 12:36:31 -- pm/common@21 -- $ date +%s 00:01:22.715 12:36:31 -- pm/common@21 -- $ date +%s 00:01:22.715 12:36:31 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731674191 00:01:22.715 12:36:31 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731674191 00:01:22.715 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731674191_collect-cpu-load.pm.log 00:01:22.715 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731674191_collect-vmstat.pm.log 00:01:23.651 12:36:32 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:23.651 12:36:32 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:23.651 12:36:32 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:23.651 12:36:32 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:23.651 12:36:32 -- spdk/autobuild.sh@16 -- $ date -u 00:01:23.651 Fri Nov 15 12:36:32 PM UTC 2024 00:01:23.651 12:36:32 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:23.651 v25.01-pre-190-gd2671b4b7 00:01:23.651 12:36:32 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:23.651 12:36:32 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:23.651 12:36:32 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:23.651 12:36:32 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:23.651 12:36:32 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:23.651 12:36:32 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.651 ************************************ 00:01:23.651 START TEST ubsan 00:01:23.651 ************************************ 00:01:23.651 using ubsan 00:01:23.651 12:36:32 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:23.651 00:01:23.651 real 0m0.000s 00:01:23.651 user 0m0.000s 00:01:23.651 sys 0m0.000s 00:01:23.651 12:36:32 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:23.651 ************************************ 00:01:23.651 END TEST ubsan 00:01:23.651 12:36:32 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:23.651 ************************************ 00:01:23.651 12:36:32 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:23.651 12:36:32 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:23.651 12:36:32 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:23.651 12:36:32 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:23.651 12:36:32 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:23.651 12:36:32 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:23.651 12:36:32 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:23.651 12:36:32 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:23.651 12:36:32 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:23.909 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:23.909 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:24.475 Using 'verbs' RDMA provider 00:01:37.702 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:52.577 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:52.577 Creating mk/config.mk...done. 00:01:52.577 Creating mk/cc.flags.mk...done. 00:01:52.577 Type 'make' to build. 
00:01:52.577 12:36:59 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:01:52.577 12:36:59 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:52.577 12:36:59 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:52.577 12:36:59 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.577 ************************************ 00:01:52.577 START TEST make 00:01:52.577 ************************************ 00:01:52.577 12:36:59 make -- common/autotest_common.sh@1129 -- $ make -j10 00:01:52.577 make[1]: Nothing to be done for 'all'. 00:02:04.781 The Meson build system 00:02:04.781 Version: 1.5.0 00:02:04.781 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:04.781 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:04.781 Build type: native build 00:02:04.781 Program cat found: YES (/usr/bin/cat) 00:02:04.781 Project name: DPDK 00:02:04.781 Project version: 24.03.0 00:02:04.781 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:04.781 C linker for the host machine: cc ld.bfd 2.40-14 00:02:04.781 Host machine cpu family: x86_64 00:02:04.781 Host machine cpu: x86_64 00:02:04.782 Message: ## Building in Developer Mode ## 00:02:04.782 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:04.782 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:04.782 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:04.782 Program python3 found: YES (/usr/bin/python3) 00:02:04.782 Program cat found: YES (/usr/bin/cat) 00:02:04.782 Compiler for C supports arguments -march=native: YES 00:02:04.782 Checking for size of "void *" : 8 00:02:04.782 Checking for size of "void *" : 8 (cached) 00:02:04.782 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:04.782 Library m found: YES 00:02:04.782 Library numa found: YES 00:02:04.782 Has header "numaif.h" : YES 00:02:04.782 Library fdt found: NO 00:02:04.782 Library execinfo found: NO 00:02:04.782 Has header "execinfo.h" : YES 00:02:04.782 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:04.782 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:04.782 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:04.782 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:04.782 Run-time dependency openssl found: YES 3.1.1 00:02:04.782 Run-time dependency libpcap found: YES 1.10.4 00:02:04.782 Has header "pcap.h" with dependency libpcap: YES 00:02:04.782 Compiler for C supports arguments -Wcast-qual: YES 00:02:04.782 Compiler for C supports arguments -Wdeprecated: YES 00:02:04.782 Compiler for C supports arguments -Wformat: YES 00:02:04.782 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:04.782 Compiler for C supports arguments -Wformat-security: NO 00:02:04.782 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:04.782 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:04.782 Compiler for C supports arguments -Wnested-externs: YES 00:02:04.782 Compiler for C supports arguments -Wold-style-definition: YES 00:02:04.782 Compiler for C supports arguments -Wpointer-arith: YES 00:02:04.782 Compiler for C supports arguments -Wsign-compare: YES 00:02:04.782 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:04.782 Compiler for C supports arguments -Wundef: YES 00:02:04.782 Compiler for C supports arguments -Wwrite-strings: YES 00:02:04.782 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:02:04.782 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:04.782 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:04.782 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:04.782 Program objdump found: YES (/usr/bin/objdump) 00:02:04.782 Compiler for C supports arguments -mavx512f: YES 00:02:04.782 Checking if "AVX512 checking" compiles: YES 00:02:04.782 Fetching value of define "__SSE4_2__" : 1 00:02:04.782 Fetching value of define "__AES__" : 1 00:02:04.782 Fetching value of define "__AVX__" : 1 00:02:04.782 Fetching value of define "__AVX2__" : 1 00:02:04.782 Fetching value of define "__AVX512BW__" : (undefined) 00:02:04.782 Fetching value of define "__AVX512CD__" : (undefined) 00:02:04.782 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:04.782 Fetching value of define "__AVX512F__" : (undefined) 00:02:04.782 Fetching value of define "__AVX512VL__" : (undefined) 00:02:04.782 Fetching value of define "__PCLMUL__" : 1 00:02:04.782 Fetching value of define "__RDRND__" : 1 00:02:04.782 Fetching value of define "__RDSEED__" : 1 00:02:04.782 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:04.782 Fetching value of define "__znver1__" : (undefined) 00:02:04.782 Fetching value of define "__znver2__" : (undefined) 00:02:04.782 Fetching value of define "__znver3__" : (undefined) 00:02:04.782 Fetching value of define "__znver4__" : (undefined) 00:02:04.782 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:04.782 Message: lib/log: Defining dependency "log" 00:02:04.782 Message: lib/kvargs: Defining dependency "kvargs" 00:02:04.782 Message: lib/telemetry: Defining dependency "telemetry" 00:02:04.782 Checking for function "getentropy" : NO 00:02:04.782 Message: lib/eal: Defining dependency "eal" 00:02:04.782 Message: lib/ring: Defining dependency "ring" 00:02:04.782 Message: lib/rcu: Defining dependency "rcu" 00:02:04.782 Message: lib/mempool: Defining dependency "mempool" 00:02:04.782 Message: lib/mbuf: Defining dependency "mbuf" 00:02:04.782 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:04.782 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:04.782 Compiler for C supports arguments -mpclmul: YES 00:02:04.782 Compiler for C supports arguments -maes: YES 00:02:04.782 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:04.782 Compiler for C supports arguments -mavx512bw: YES 00:02:04.782 Compiler for C supports arguments -mavx512dq: YES 00:02:04.782 Compiler for C supports arguments -mavx512vl: YES 00:02:04.782 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:04.782 Compiler for C supports arguments -mavx2: YES 00:02:04.782 Compiler for C supports arguments -mavx: YES 00:02:04.782 Message: lib/net: Defining dependency "net" 00:02:04.782 Message: lib/meter: Defining dependency "meter" 00:02:04.782 Message: lib/ethdev: Defining dependency "ethdev" 00:02:04.782 Message: lib/pci: Defining dependency "pci" 00:02:04.782 Message: lib/cmdline: Defining dependency "cmdline" 00:02:04.782 Message: lib/hash: Defining dependency "hash" 00:02:04.782 Message: lib/timer: Defining dependency "timer" 00:02:04.782 Message: lib/compressdev: Defining dependency "compressdev" 00:02:04.782 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:04.782 Message: lib/dmadev: Defining dependency "dmadev" 00:02:04.782 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:04.782 Message: lib/power: Defining 
dependency "power" 00:02:04.782 Message: lib/reorder: Defining dependency "reorder" 00:02:04.782 Message: lib/security: Defining dependency "security" 00:02:04.782 Has header "linux/userfaultfd.h" : YES 00:02:04.782 Has header "linux/vduse.h" : YES 00:02:04.782 Message: lib/vhost: Defining dependency "vhost" 00:02:04.782 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:04.782 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:04.782 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:04.782 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:04.782 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:04.782 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:04.782 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:04.782 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:04.782 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:04.782 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:04.782 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:04.782 Configuring doxy-api-html.conf using configuration 00:02:04.782 Configuring doxy-api-man.conf using configuration 00:02:04.782 Program mandb found: YES (/usr/bin/mandb) 00:02:04.782 Program sphinx-build found: NO 00:02:04.782 Configuring rte_build_config.h using configuration 00:02:04.782 Message: 00:02:04.782 ================= 00:02:04.782 Applications Enabled 00:02:04.782 ================= 00:02:04.782 00:02:04.782 apps: 00:02:04.782 00:02:04.782 00:02:04.782 Message: 00:02:04.782 ================= 00:02:04.782 Libraries Enabled 00:02:04.782 ================= 00:02:04.782 00:02:04.782 libs: 00:02:04.782 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:04.782 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:04.782 cryptodev, dmadev, power, reorder, security, vhost, 00:02:04.782 00:02:04.782 Message: 00:02:04.782 =============== 00:02:04.782 Drivers Enabled 00:02:04.782 =============== 00:02:04.782 00:02:04.782 common: 00:02:04.782 00:02:04.782 bus: 00:02:04.782 pci, vdev, 00:02:04.782 mempool: 00:02:04.782 ring, 00:02:04.782 dma: 00:02:04.782 00:02:04.782 net: 00:02:04.782 00:02:04.782 crypto: 00:02:04.782 00:02:04.782 compress: 00:02:04.782 00:02:04.782 vdpa: 00:02:04.782 00:02:04.782 00:02:04.782 Message: 00:02:04.782 ================= 00:02:04.782 Content Skipped 00:02:04.782 ================= 00:02:04.782 00:02:04.782 apps: 00:02:04.782 dumpcap: explicitly disabled via build config 00:02:04.782 graph: explicitly disabled via build config 00:02:04.782 pdump: explicitly disabled via build config 00:02:04.782 proc-info: explicitly disabled via build config 00:02:04.782 test-acl: explicitly disabled via build config 00:02:04.782 test-bbdev: explicitly disabled via build config 00:02:04.782 test-cmdline: explicitly disabled via build config 00:02:04.782 test-compress-perf: explicitly disabled via build config 00:02:04.782 test-crypto-perf: explicitly disabled via build config 00:02:04.782 test-dma-perf: explicitly disabled via build config 00:02:04.782 test-eventdev: explicitly disabled via build config 00:02:04.782 test-fib: explicitly disabled via build config 00:02:04.782 test-flow-perf: explicitly disabled via build config 00:02:04.782 test-gpudev: explicitly disabled via build config 00:02:04.782 test-mldev: explicitly disabled via build config 00:02:04.782 test-pipeline: 
explicitly disabled via build config 00:02:04.782 test-pmd: explicitly disabled via build config 00:02:04.782 test-regex: explicitly disabled via build config 00:02:04.782 test-sad: explicitly disabled via build config 00:02:04.782 test-security-perf: explicitly disabled via build config 00:02:04.782 00:02:04.782 libs: 00:02:04.782 argparse: explicitly disabled via build config 00:02:04.782 metrics: explicitly disabled via build config 00:02:04.782 acl: explicitly disabled via build config 00:02:04.782 bbdev: explicitly disabled via build config 00:02:04.782 bitratestats: explicitly disabled via build config 00:02:04.782 bpf: explicitly disabled via build config 00:02:04.782 cfgfile: explicitly disabled via build config 00:02:04.782 distributor: explicitly disabled via build config 00:02:04.783 efd: explicitly disabled via build config 00:02:04.783 eventdev: explicitly disabled via build config 00:02:04.783 dispatcher: explicitly disabled via build config 00:02:04.783 gpudev: explicitly disabled via build config 00:02:04.783 gro: explicitly disabled via build config 00:02:04.783 gso: explicitly disabled via build config 00:02:04.783 ip_frag: explicitly disabled via build config 00:02:04.783 jobstats: explicitly disabled via build config 00:02:04.783 latencystats: explicitly disabled via build config 00:02:04.783 lpm: explicitly disabled via build config 00:02:04.783 member: explicitly disabled via build config 00:02:04.783 pcapng: explicitly disabled via build config 00:02:04.783 rawdev: explicitly disabled via build config 00:02:04.783 regexdev: explicitly disabled via build config 00:02:04.783 mldev: explicitly disabled via build config 00:02:04.783 rib: explicitly disabled via build config 00:02:04.783 sched: explicitly disabled via build config 00:02:04.783 stack: explicitly disabled via build config 00:02:04.783 ipsec: explicitly disabled via build config 00:02:04.783 pdcp: explicitly disabled via build config 00:02:04.783 fib: explicitly disabled via build config 00:02:04.783 port: explicitly disabled via build config 00:02:04.783 pdump: explicitly disabled via build config 00:02:04.783 table: explicitly disabled via build config 00:02:04.783 pipeline: explicitly disabled via build config 00:02:04.783 graph: explicitly disabled via build config 00:02:04.783 node: explicitly disabled via build config 00:02:04.783 00:02:04.783 drivers: 00:02:04.783 common/cpt: not in enabled drivers build config 00:02:04.783 common/dpaax: not in enabled drivers build config 00:02:04.783 common/iavf: not in enabled drivers build config 00:02:04.783 common/idpf: not in enabled drivers build config 00:02:04.783 common/ionic: not in enabled drivers build config 00:02:04.783 common/mvep: not in enabled drivers build config 00:02:04.783 common/octeontx: not in enabled drivers build config 00:02:04.783 bus/auxiliary: not in enabled drivers build config 00:02:04.783 bus/cdx: not in enabled drivers build config 00:02:04.783 bus/dpaa: not in enabled drivers build config 00:02:04.783 bus/fslmc: not in enabled drivers build config 00:02:04.783 bus/ifpga: not in enabled drivers build config 00:02:04.783 bus/platform: not in enabled drivers build config 00:02:04.783 bus/uacce: not in enabled drivers build config 00:02:04.783 bus/vmbus: not in enabled drivers build config 00:02:04.783 common/cnxk: not in enabled drivers build config 00:02:04.783 common/mlx5: not in enabled drivers build config 00:02:04.783 common/nfp: not in enabled drivers build config 00:02:04.783 common/nitrox: not in enabled drivers build config 
00:02:04.783 common/qat: not in enabled drivers build config 00:02:04.783 common/sfc_efx: not in enabled drivers build config 00:02:04.783 mempool/bucket: not in enabled drivers build config 00:02:04.783 mempool/cnxk: not in enabled drivers build config 00:02:04.783 mempool/dpaa: not in enabled drivers build config 00:02:04.783 mempool/dpaa2: not in enabled drivers build config 00:02:04.783 mempool/octeontx: not in enabled drivers build config 00:02:04.783 mempool/stack: not in enabled drivers build config 00:02:04.783 dma/cnxk: not in enabled drivers build config 00:02:04.783 dma/dpaa: not in enabled drivers build config 00:02:04.783 dma/dpaa2: not in enabled drivers build config 00:02:04.783 dma/hisilicon: not in enabled drivers build config 00:02:04.783 dma/idxd: not in enabled drivers build config 00:02:04.783 dma/ioat: not in enabled drivers build config 00:02:04.783 dma/skeleton: not in enabled drivers build config 00:02:04.783 net/af_packet: not in enabled drivers build config 00:02:04.783 net/af_xdp: not in enabled drivers build config 00:02:04.783 net/ark: not in enabled drivers build config 00:02:04.783 net/atlantic: not in enabled drivers build config 00:02:04.783 net/avp: not in enabled drivers build config 00:02:04.783 net/axgbe: not in enabled drivers build config 00:02:04.783 net/bnx2x: not in enabled drivers build config 00:02:04.783 net/bnxt: not in enabled drivers build config 00:02:04.783 net/bonding: not in enabled drivers build config 00:02:04.783 net/cnxk: not in enabled drivers build config 00:02:04.783 net/cpfl: not in enabled drivers build config 00:02:04.783 net/cxgbe: not in enabled drivers build config 00:02:04.783 net/dpaa: not in enabled drivers build config 00:02:04.783 net/dpaa2: not in enabled drivers build config 00:02:04.783 net/e1000: not in enabled drivers build config 00:02:04.783 net/ena: not in enabled drivers build config 00:02:04.783 net/enetc: not in enabled drivers build config 00:02:04.783 net/enetfec: not in enabled drivers build config 00:02:04.783 net/enic: not in enabled drivers build config 00:02:04.783 net/failsafe: not in enabled drivers build config 00:02:04.783 net/fm10k: not in enabled drivers build config 00:02:04.783 net/gve: not in enabled drivers build config 00:02:04.783 net/hinic: not in enabled drivers build config 00:02:04.783 net/hns3: not in enabled drivers build config 00:02:04.783 net/i40e: not in enabled drivers build config 00:02:04.783 net/iavf: not in enabled drivers build config 00:02:04.783 net/ice: not in enabled drivers build config 00:02:04.783 net/idpf: not in enabled drivers build config 00:02:04.783 net/igc: not in enabled drivers build config 00:02:04.783 net/ionic: not in enabled drivers build config 00:02:04.783 net/ipn3ke: not in enabled drivers build config 00:02:04.783 net/ixgbe: not in enabled drivers build config 00:02:04.783 net/mana: not in enabled drivers build config 00:02:04.783 net/memif: not in enabled drivers build config 00:02:04.783 net/mlx4: not in enabled drivers build config 00:02:04.783 net/mlx5: not in enabled drivers build config 00:02:04.783 net/mvneta: not in enabled drivers build config 00:02:04.783 net/mvpp2: not in enabled drivers build config 00:02:04.783 net/netvsc: not in enabled drivers build config 00:02:04.783 net/nfb: not in enabled drivers build config 00:02:04.783 net/nfp: not in enabled drivers build config 00:02:04.783 net/ngbe: not in enabled drivers build config 00:02:04.783 net/null: not in enabled drivers build config 00:02:04.783 net/octeontx: not in enabled drivers 
build config 00:02:04.783 net/octeon_ep: not in enabled drivers build config 00:02:04.783 net/pcap: not in enabled drivers build config 00:02:04.783 net/pfe: not in enabled drivers build config 00:02:04.783 net/qede: not in enabled drivers build config 00:02:04.783 net/ring: not in enabled drivers build config 00:02:04.783 net/sfc: not in enabled drivers build config 00:02:04.783 net/softnic: not in enabled drivers build config 00:02:04.783 net/tap: not in enabled drivers build config 00:02:04.783 net/thunderx: not in enabled drivers build config 00:02:04.783 net/txgbe: not in enabled drivers build config 00:02:04.783 net/vdev_netvsc: not in enabled drivers build config 00:02:04.783 net/vhost: not in enabled drivers build config 00:02:04.783 net/virtio: not in enabled drivers build config 00:02:04.783 net/vmxnet3: not in enabled drivers build config 00:02:04.783 raw/*: missing internal dependency, "rawdev" 00:02:04.783 crypto/armv8: not in enabled drivers build config 00:02:04.783 crypto/bcmfs: not in enabled drivers build config 00:02:04.783 crypto/caam_jr: not in enabled drivers build config 00:02:04.783 crypto/ccp: not in enabled drivers build config 00:02:04.783 crypto/cnxk: not in enabled drivers build config 00:02:04.783 crypto/dpaa_sec: not in enabled drivers build config 00:02:04.783 crypto/dpaa2_sec: not in enabled drivers build config 00:02:04.783 crypto/ipsec_mb: not in enabled drivers build config 00:02:04.783 crypto/mlx5: not in enabled drivers build config 00:02:04.783 crypto/mvsam: not in enabled drivers build config 00:02:04.783 crypto/nitrox: not in enabled drivers build config 00:02:04.783 crypto/null: not in enabled drivers build config 00:02:04.783 crypto/octeontx: not in enabled drivers build config 00:02:04.783 crypto/openssl: not in enabled drivers build config 00:02:04.783 crypto/scheduler: not in enabled drivers build config 00:02:04.783 crypto/uadk: not in enabled drivers build config 00:02:04.783 crypto/virtio: not in enabled drivers build config 00:02:04.783 compress/isal: not in enabled drivers build config 00:02:04.783 compress/mlx5: not in enabled drivers build config 00:02:04.783 compress/nitrox: not in enabled drivers build config 00:02:04.783 compress/octeontx: not in enabled drivers build config 00:02:04.783 compress/zlib: not in enabled drivers build config 00:02:04.783 regex/*: missing internal dependency, "regexdev" 00:02:04.783 ml/*: missing internal dependency, "mldev" 00:02:04.783 vdpa/ifc: not in enabled drivers build config 00:02:04.783 vdpa/mlx5: not in enabled drivers build config 00:02:04.783 vdpa/nfp: not in enabled drivers build config 00:02:04.783 vdpa/sfc: not in enabled drivers build config 00:02:04.783 event/*: missing internal dependency, "eventdev" 00:02:04.783 baseband/*: missing internal dependency, "bbdev" 00:02:04.783 gpu/*: missing internal dependency, "gpudev" 00:02:04.783 00:02:04.783 00:02:04.783 Build targets in project: 85 00:02:04.783 00:02:04.783 DPDK 24.03.0 00:02:04.783 00:02:04.783 User defined options 00:02:04.783 buildtype : debug 00:02:04.783 default_library : shared 00:02:04.783 libdir : lib 00:02:04.783 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:04.783 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:04.783 c_link_args : 00:02:04.783 cpu_instruction_set: native 00:02:04.783 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:04.783 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:04.783 enable_docs : false 00:02:04.783 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:04.783 enable_kmods : false 00:02:04.783 max_lcores : 128 00:02:04.783 tests : false 00:02:04.783 00:02:04.783 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:04.783 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:04.783 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:04.783 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:04.783 [3/268] Linking static target lib/librte_kvargs.a 00:02:04.783 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:04.783 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:04.783 [6/268] Linking static target lib/librte_log.a 00:02:04.784 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.784 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:04.784 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:04.784 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:04.784 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:04.784 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:04.784 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:04.784 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:04.784 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:04.784 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:04.784 [17/268] Linking static target lib/librte_telemetry.a 00:02:05.042 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:05.042 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.042 [20/268] Linking target lib/librte_log.so.24.1 00:02:05.301 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:05.301 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:05.560 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:05.560 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:05.560 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:05.560 [26/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:05.560 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:05.560 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:05.820 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.820 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:05.820 [31/268] 
Linking target lib/librte_telemetry.so.24.1 00:02:05.820 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:05.820 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:06.078 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:06.078 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:06.078 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:06.078 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:06.337 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:06.337 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:06.595 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:06.595 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:06.595 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:06.595 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:06.595 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:06.595 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:06.595 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:06.854 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:06.854 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:07.112 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:07.112 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:07.371 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:07.371 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:07.371 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:07.629 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:07.629 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:07.629 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:07.629 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:07.887 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:07.887 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:07.887 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:08.145 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:08.145 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:08.145 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:08.403 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:08.661 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:08.661 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:08.661 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:08.661 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:08.661 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:08.919 [70/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:08.920 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:08.920 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:08.920 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:08.920 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:09.177 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:09.177 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:09.177 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:09.436 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:09.436 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:09.436 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:09.436 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:09.694 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:09.694 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:09.694 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:09.694 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:09.694 [86/268] Linking static target lib/librte_ring.a 00:02:09.952 [87/268] Linking static target lib/librte_eal.a 00:02:09.952 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:10.211 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:10.211 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:10.211 [91/268] Linking static target lib/librte_rcu.a 00:02:10.211 [92/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.469 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:10.469 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:10.469 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:10.469 [96/268] Linking static target lib/librte_mempool.a 00:02:10.469 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:10.469 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:10.728 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:10.728 [100/268] Linking static target lib/librte_mbuf.a 00:02:10.728 [101/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:10.728 [102/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:10.728 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.986 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:10.986 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:10.986 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:10.986 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:11.245 [108/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:11.245 [109/268] Linking static target lib/librte_net.a 00:02:11.503 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:11.503 [111/268] Linking static target lib/librte_meter.a 00:02:11.503 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:11.503 [113/268] Generating 
lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.761 [114/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.761 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:11.761 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:11.761 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.761 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.761 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:12.328 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:12.586 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:12.586 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:12.845 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:12.845 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:12.845 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:12.845 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:12.845 [127/268] Linking static target lib/librte_pci.a 00:02:13.103 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:13.103 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:13.103 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:13.103 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:13.103 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:13.103 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:13.361 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:13.361 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:13.361 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:13.361 [137/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.361 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:13.361 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:13.361 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:13.361 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:13.361 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:13.361 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:13.361 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:13.361 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:13.620 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:13.620 [147/268] Linking static target lib/librte_ethdev.a 00:02:13.878 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:14.136 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:14.136 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:14.136 [151/268] Linking static target lib/librte_timer.a 00:02:14.136 [152/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:14.136 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:14.136 [154/268] Linking static target lib/librte_cmdline.a 00:02:14.136 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:14.395 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:14.395 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:14.395 [158/268] Linking static target lib/librte_hash.a 00:02:14.653 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:14.653 [160/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.653 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:14.911 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:14.911 [163/268] Linking static target lib/librte_compressdev.a 00:02:14.911 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:14.911 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:15.477 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:15.477 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:15.477 [168/268] Linking static target lib/librte_dmadev.a 00:02:15.477 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:15.477 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:15.477 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:15.477 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:15.735 [173/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.735 [174/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:15.735 [175/268] Linking static target lib/librte_cryptodev.a 00:02:15.735 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.735 [177/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.994 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:16.251 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:16.251 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:16.251 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:16.251 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:16.251 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:16.251 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.818 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:16.818 [186/268] Linking static target lib/librte_power.a 00:02:16.818 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:17.076 [188/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:17.076 [189/268] Linking static target lib/librte_security.a 00:02:17.076 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:17.076 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:17.335 [192/268] Compiling C 
object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:17.335 [193/268] Linking static target lib/librte_reorder.a 00:02:17.335 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:17.594 [195/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.852 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.852 [197/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.852 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:17.852 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:18.110 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:18.110 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.380 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:18.380 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:18.654 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:18.654 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:18.654 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:18.654 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:18.911 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:18.911 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:18.912 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:18.912 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:18.912 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:19.170 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:19.170 [214/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:19.170 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:19.170 [216/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:19.170 [217/268] Linking static target drivers/librte_bus_pci.a 00:02:19.170 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:19.170 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:19.170 [220/268] Linking static target drivers/librte_bus_vdev.a 00:02:19.170 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:19.429 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:19.429 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:19.429 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:19.429 [225/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.429 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:19.429 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:19.687 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.253 [229/268] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:20.512 [230/268] Linking static target lib/librte_vhost.a 00:02:21.079 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.079 [232/268] Linking target lib/librte_eal.so.24.1 00:02:21.079 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:21.079 [234/268] Linking target lib/librte_meter.so.24.1 00:02:21.079 [235/268] Linking target lib/librte_timer.so.24.1 00:02:21.079 [236/268] Linking target lib/librte_ring.so.24.1 00:02:21.079 [237/268] Linking target lib/librte_pci.so.24.1 00:02:21.079 [238/268] Linking target lib/librte_dmadev.so.24.1 00:02:21.079 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:21.337 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:21.337 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:21.337 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:21.337 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:21.337 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:21.337 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:21.337 [246/268] Linking target lib/librte_mempool.so.24.1 00:02:21.337 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:21.337 [248/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.596 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:21.596 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:21.596 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:21.596 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:21.596 [253/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.596 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:21.596 [255/268] Linking target lib/librte_net.so.24.1 00:02:21.596 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:21.596 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:21.596 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:21.854 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:21.854 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:21.854 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:21.854 [262/268] Linking target lib/librte_hash.so.24.1 00:02:21.854 [263/268] Linking target lib/librte_security.so.24.1 00:02:21.854 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:22.112 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:22.112 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:22.112 [267/268] Linking target lib/librte_power.so.24.1 00:02:22.112 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:22.112 INFO: autodetecting backend as ninja 00:02:22.112 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:48.647 CC lib/ut/ut.o 00:02:48.647 CC lib/log/log_flags.o 00:02:48.647 CC lib/log/log.o 00:02:48.647 CC lib/log/log_deprecated.o 00:02:48.647 CC lib/ut_mock/mock.o 00:02:48.647 LIB 
libspdk_ut.a 00:02:48.647 LIB libspdk_ut_mock.a 00:02:48.647 SO libspdk_ut.so.2.0 00:02:48.647 LIB libspdk_log.a 00:02:48.647 SO libspdk_ut_mock.so.6.0 00:02:48.647 SO libspdk_log.so.7.1 00:02:48.647 SYMLINK libspdk_ut_mock.so 00:02:48.647 SYMLINK libspdk_ut.so 00:02:48.647 SYMLINK libspdk_log.so 00:02:48.647 CC lib/util/base64.o 00:02:48.647 CC lib/util/bit_array.o 00:02:48.647 CC lib/util/cpuset.o 00:02:48.647 CC lib/util/crc16.o 00:02:48.647 CC lib/util/crc32.o 00:02:48.647 CXX lib/trace_parser/trace.o 00:02:48.647 CC lib/ioat/ioat.o 00:02:48.647 CC lib/util/crc32c.o 00:02:48.647 CC lib/dma/dma.o 00:02:48.647 CC lib/vfio_user/host/vfio_user_pci.o 00:02:48.647 CC lib/util/crc32_ieee.o 00:02:48.647 CC lib/util/crc64.o 00:02:48.647 CC lib/vfio_user/host/vfio_user.o 00:02:48.647 CC lib/util/dif.o 00:02:48.647 LIB libspdk_dma.a 00:02:48.647 CC lib/util/fd.o 00:02:48.647 CC lib/util/fd_group.o 00:02:48.647 SO libspdk_dma.so.5.0 00:02:48.647 LIB libspdk_ioat.a 00:02:48.647 SYMLINK libspdk_dma.so 00:02:48.647 CC lib/util/file.o 00:02:48.647 SO libspdk_ioat.so.7.0 00:02:48.647 CC lib/util/hexlify.o 00:02:48.647 CC lib/util/iov.o 00:02:48.647 SYMLINK libspdk_ioat.so 00:02:48.647 LIB libspdk_vfio_user.a 00:02:48.647 CC lib/util/math.o 00:02:48.647 CC lib/util/net.o 00:02:48.647 CC lib/util/pipe.o 00:02:48.647 SO libspdk_vfio_user.so.5.0 00:02:48.647 CC lib/util/strerror_tls.o 00:02:48.647 SYMLINK libspdk_vfio_user.so 00:02:48.647 CC lib/util/string.o 00:02:48.647 CC lib/util/uuid.o 00:02:48.647 CC lib/util/xor.o 00:02:48.647 CC lib/util/zipf.o 00:02:48.647 CC lib/util/md5.o 00:02:48.647 LIB libspdk_util.a 00:02:48.647 SO libspdk_util.so.10.1 00:02:48.647 SYMLINK libspdk_util.so 00:02:48.647 LIB libspdk_trace_parser.a 00:02:48.647 SO libspdk_trace_parser.so.6.0 00:02:48.647 CC lib/conf/conf.o 00:02:48.647 CC lib/env_dpdk/env.o 00:02:48.647 CC lib/env_dpdk/memory.o 00:02:48.647 CC lib/json/json_parse.o 00:02:48.648 CC lib/env_dpdk/pci.o 00:02:48.648 CC lib/env_dpdk/init.o 00:02:48.648 CC lib/rdma_utils/rdma_utils.o 00:02:48.648 CC lib/idxd/idxd.o 00:02:48.648 CC lib/vmd/vmd.o 00:02:48.648 SYMLINK libspdk_trace_parser.so 00:02:48.648 CC lib/vmd/led.o 00:02:48.648 CC lib/env_dpdk/threads.o 00:02:48.648 CC lib/json/json_util.o 00:02:48.648 LIB libspdk_conf.a 00:02:48.648 LIB libspdk_rdma_utils.a 00:02:48.648 SO libspdk_conf.so.6.0 00:02:48.648 SO libspdk_rdma_utils.so.1.0 00:02:48.648 SYMLINK libspdk_rdma_utils.so 00:02:48.648 CC lib/env_dpdk/pci_ioat.o 00:02:48.648 CC lib/env_dpdk/pci_virtio.o 00:02:48.648 CC lib/env_dpdk/pci_vmd.o 00:02:48.648 CC lib/idxd/idxd_user.o 00:02:48.648 SYMLINK libspdk_conf.so 00:02:48.648 CC lib/env_dpdk/pci_idxd.o 00:02:48.648 CC lib/json/json_write.o 00:02:48.648 CC lib/env_dpdk/pci_event.o 00:02:48.648 CC lib/env_dpdk/sigbus_handler.o 00:02:48.648 CC lib/env_dpdk/pci_dpdk.o 00:02:48.648 CC lib/idxd/idxd_kernel.o 00:02:48.648 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:48.648 LIB libspdk_vmd.a 00:02:48.648 SO libspdk_vmd.so.6.0 00:02:48.648 CC lib/rdma_provider/common.o 00:02:48.648 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:48.648 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:48.648 SYMLINK libspdk_vmd.so 00:02:48.648 LIB libspdk_idxd.a 00:02:48.648 SO libspdk_idxd.so.12.1 00:02:48.648 LIB libspdk_json.a 00:02:48.648 SO libspdk_json.so.6.0 00:02:48.648 SYMLINK libspdk_idxd.so 00:02:48.648 SYMLINK libspdk_json.so 00:02:48.648 LIB libspdk_rdma_provider.a 00:02:48.648 SO libspdk_rdma_provider.so.7.0 00:02:48.648 SYMLINK libspdk_rdma_provider.so 00:02:48.648 CC 
lib/jsonrpc/jsonrpc_server.o 00:02:48.648 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:48.648 CC lib/jsonrpc/jsonrpc_client.o 00:02:48.648 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:48.648 LIB libspdk_jsonrpc.a 00:02:48.648 LIB libspdk_env_dpdk.a 00:02:48.648 SO libspdk_jsonrpc.so.6.0 00:02:48.648 SYMLINK libspdk_jsonrpc.so 00:02:48.648 SO libspdk_env_dpdk.so.15.1 00:02:48.648 SYMLINK libspdk_env_dpdk.so 00:02:48.906 CC lib/rpc/rpc.o 00:02:49.165 LIB libspdk_rpc.a 00:02:49.165 SO libspdk_rpc.so.6.0 00:02:49.165 SYMLINK libspdk_rpc.so 00:02:49.423 CC lib/keyring/keyring.o 00:02:49.423 CC lib/keyring/keyring_rpc.o 00:02:49.423 CC lib/trace/trace.o 00:02:49.423 CC lib/trace/trace_rpc.o 00:02:49.423 CC lib/trace/trace_flags.o 00:02:49.423 CC lib/notify/notify.o 00:02:49.423 CC lib/notify/notify_rpc.o 00:02:49.681 LIB libspdk_notify.a 00:02:49.681 SO libspdk_notify.so.6.0 00:02:49.681 LIB libspdk_keyring.a 00:02:49.681 LIB libspdk_trace.a 00:02:49.681 SYMLINK libspdk_notify.so 00:02:49.681 SO libspdk_keyring.so.2.0 00:02:49.681 SO libspdk_trace.so.11.0 00:02:49.681 SYMLINK libspdk_keyring.so 00:02:49.681 SYMLINK libspdk_trace.so 00:02:49.940 CC lib/sock/sock.o 00:02:49.940 CC lib/sock/sock_rpc.o 00:02:49.940 CC lib/thread/thread.o 00:02:49.940 CC lib/thread/iobuf.o 00:02:50.508 LIB libspdk_sock.a 00:02:50.508 SO libspdk_sock.so.10.0 00:02:50.508 SYMLINK libspdk_sock.so 00:02:50.767 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:50.767 CC lib/nvme/nvme_ns_cmd.o 00:02:50.767 CC lib/nvme/nvme_fabric.o 00:02:50.767 CC lib/nvme/nvme_ctrlr.o 00:02:50.767 CC lib/nvme/nvme_ns.o 00:02:50.767 CC lib/nvme/nvme_qpair.o 00:02:50.767 CC lib/nvme/nvme_pcie_common.o 00:02:50.767 CC lib/nvme/nvme_pcie.o 00:02:51.026 CC lib/nvme/nvme.o 00:02:51.593 LIB libspdk_thread.a 00:02:51.593 SO libspdk_thread.so.11.0 00:02:51.593 SYMLINK libspdk_thread.so 00:02:51.593 CC lib/nvme/nvme_quirks.o 00:02:51.852 CC lib/nvme/nvme_transport.o 00:02:51.852 CC lib/nvme/nvme_discovery.o 00:02:51.852 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:51.852 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:51.852 CC lib/nvme/nvme_tcp.o 00:02:51.852 CC lib/accel/accel.o 00:02:51.852 CC lib/nvme/nvme_opal.o 00:02:52.418 CC lib/blob/blobstore.o 00:02:52.418 CC lib/nvme/nvme_io_msg.o 00:02:52.418 CC lib/accel/accel_rpc.o 00:02:52.677 CC lib/nvme/nvme_poll_group.o 00:02:52.677 CC lib/init/json_config.o 00:02:52.677 CC lib/accel/accel_sw.o 00:02:52.677 CC lib/virtio/virtio.o 00:02:52.677 CC lib/fsdev/fsdev.o 00:02:52.935 CC lib/init/subsystem.o 00:02:52.935 CC lib/init/subsystem_rpc.o 00:02:52.935 CC lib/virtio/virtio_vhost_user.o 00:02:52.935 CC lib/init/rpc.o 00:02:52.935 CC lib/fsdev/fsdev_io.o 00:02:52.935 LIB libspdk_accel.a 00:02:52.935 SO libspdk_accel.so.16.0 00:02:53.194 CC lib/nvme/nvme_zns.o 00:02:53.194 SYMLINK libspdk_accel.so 00:02:53.194 CC lib/nvme/nvme_stubs.o 00:02:53.194 LIB libspdk_init.a 00:02:53.194 SO libspdk_init.so.6.0 00:02:53.194 CC lib/fsdev/fsdev_rpc.o 00:02:53.194 SYMLINK libspdk_init.so 00:02:53.194 CC lib/virtio/virtio_vfio_user.o 00:02:53.194 CC lib/virtio/virtio_pci.o 00:02:53.194 CC lib/blob/request.o 00:02:53.194 CC lib/blob/zeroes.o 00:02:53.452 CC lib/blob/blob_bs_dev.o 00:02:53.452 LIB libspdk_fsdev.a 00:02:53.452 SO libspdk_fsdev.so.2.0 00:02:53.452 SYMLINK libspdk_fsdev.so 00:02:53.452 CC lib/nvme/nvme_auth.o 00:02:53.452 LIB libspdk_virtio.a 00:02:53.711 CC lib/nvme/nvme_cuse.o 00:02:53.711 SO libspdk_virtio.so.7.0 00:02:53.711 CC lib/bdev/bdev.o 00:02:53.711 CC lib/event/app.o 00:02:53.711 CC lib/fuse_dispatcher/fuse_dispatcher.o 
00:02:53.711 CC lib/bdev/bdev_rpc.o 00:02:53.711 SYMLINK libspdk_virtio.so 00:02:53.711 CC lib/bdev/bdev_zone.o 00:02:53.711 CC lib/bdev/part.o 00:02:53.711 CC lib/bdev/scsi_nvme.o 00:02:53.970 CC lib/nvme/nvme_rdma.o 00:02:53.970 CC lib/event/reactor.o 00:02:53.970 CC lib/event/log_rpc.o 00:02:53.970 CC lib/event/app_rpc.o 00:02:53.970 CC lib/event/scheduler_static.o 00:02:54.229 LIB libspdk_fuse_dispatcher.a 00:02:54.229 SO libspdk_fuse_dispatcher.so.1.0 00:02:54.229 LIB libspdk_event.a 00:02:54.488 SYMLINK libspdk_fuse_dispatcher.so 00:02:54.488 SO libspdk_event.so.14.0 00:02:54.488 SYMLINK libspdk_event.so 00:02:55.425 LIB libspdk_nvme.a 00:02:55.425 LIB libspdk_blob.a 00:02:55.425 SO libspdk_blob.so.11.0 00:02:55.425 SO libspdk_nvme.so.15.0 00:02:55.425 SYMLINK libspdk_blob.so 00:02:55.684 SYMLINK libspdk_nvme.so 00:02:55.684 CC lib/blobfs/blobfs.o 00:02:55.684 CC lib/blobfs/tree.o 00:02:55.684 CC lib/lvol/lvol.o 00:02:56.257 LIB libspdk_bdev.a 00:02:56.518 SO libspdk_bdev.so.17.0 00:02:56.518 SYMLINK libspdk_bdev.so 00:02:56.776 CC lib/nvmf/ctrlr.o 00:02:56.776 CC lib/nvmf/ctrlr_discovery.o 00:02:56.776 CC lib/nvmf/subsystem.o 00:02:56.776 CC lib/ublk/ublk.o 00:02:56.776 CC lib/nvmf/ctrlr_bdev.o 00:02:56.776 LIB libspdk_blobfs.a 00:02:56.776 CC lib/ftl/ftl_core.o 00:02:56.776 CC lib/nbd/nbd.o 00:02:56.776 CC lib/scsi/dev.o 00:02:56.776 SO libspdk_blobfs.so.10.0 00:02:56.776 SYMLINK libspdk_blobfs.so 00:02:56.776 CC lib/scsi/lun.o 00:02:56.776 LIB libspdk_lvol.a 00:02:56.776 SO libspdk_lvol.so.10.0 00:02:57.035 SYMLINK libspdk_lvol.so 00:02:57.035 CC lib/ftl/ftl_init.o 00:02:57.035 CC lib/ftl/ftl_layout.o 00:02:57.035 CC lib/nbd/nbd_rpc.o 00:02:57.035 CC lib/nvmf/nvmf.o 00:02:57.035 CC lib/scsi/port.o 00:02:57.294 CC lib/ftl/ftl_debug.o 00:02:57.294 CC lib/nvmf/nvmf_rpc.o 00:02:57.294 LIB libspdk_nbd.a 00:02:57.294 CC lib/scsi/scsi.o 00:02:57.294 SO libspdk_nbd.so.7.0 00:02:57.294 CC lib/ublk/ublk_rpc.o 00:02:57.294 CC lib/scsi/scsi_bdev.o 00:02:57.294 CC lib/nvmf/transport.o 00:02:57.294 SYMLINK libspdk_nbd.so 00:02:57.294 CC lib/nvmf/tcp.o 00:02:57.294 CC lib/ftl/ftl_io.o 00:02:57.552 CC lib/ftl/ftl_sb.o 00:02:57.552 LIB libspdk_ublk.a 00:02:57.552 SO libspdk_ublk.so.3.0 00:02:57.552 SYMLINK libspdk_ublk.so 00:02:57.552 CC lib/scsi/scsi_pr.o 00:02:57.552 CC lib/scsi/scsi_rpc.o 00:02:57.810 CC lib/ftl/ftl_l2p.o 00:02:57.810 CC lib/scsi/task.o 00:02:57.810 CC lib/nvmf/stubs.o 00:02:57.810 CC lib/ftl/ftl_l2p_flat.o 00:02:57.810 CC lib/nvmf/mdns_server.o 00:02:58.071 CC lib/ftl/ftl_nv_cache.o 00:02:58.071 CC lib/nvmf/rdma.o 00:02:58.071 LIB libspdk_scsi.a 00:02:58.071 CC lib/nvmf/auth.o 00:02:58.071 CC lib/ftl/ftl_band.o 00:02:58.071 SO libspdk_scsi.so.9.0 00:02:58.071 CC lib/ftl/ftl_band_ops.o 00:02:58.330 SYMLINK libspdk_scsi.so 00:02:58.330 CC lib/ftl/ftl_writer.o 00:02:58.330 CC lib/ftl/ftl_rq.o 00:02:58.330 CC lib/iscsi/conn.o 00:02:58.589 CC lib/ftl/ftl_reloc.o 00:02:58.590 CC lib/iscsi/init_grp.o 00:02:58.590 CC lib/ftl/ftl_l2p_cache.o 00:02:58.590 CC lib/ftl/ftl_p2l.o 00:02:58.590 CC lib/vhost/vhost.o 00:02:58.848 CC lib/iscsi/iscsi.o 00:02:58.848 CC lib/iscsi/param.o 00:02:58.848 CC lib/iscsi/portal_grp.o 00:02:58.848 CC lib/iscsi/tgt_node.o 00:02:58.848 CC lib/vhost/vhost_rpc.o 00:02:59.107 CC lib/ftl/ftl_p2l_log.o 00:02:59.107 CC lib/ftl/mngt/ftl_mngt.o 00:02:59.107 CC lib/iscsi/iscsi_subsystem.o 00:02:59.107 CC lib/iscsi/iscsi_rpc.o 00:02:59.107 CC lib/vhost/vhost_scsi.o 00:02:59.452 CC lib/vhost/vhost_blk.o 00:02:59.452 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:59.452 CC 
lib/iscsi/task.o 00:02:59.452 CC lib/vhost/rte_vhost_user.o 00:02:59.452 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:59.755 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:59.755 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:59.755 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:59.755 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:59.755 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:59.755 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:59.755 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:00.014 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:00.014 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:00.014 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:00.014 LIB libspdk_nvmf.a 00:03:00.014 CC lib/ftl/utils/ftl_conf.o 00:03:00.014 CC lib/ftl/utils/ftl_md.o 00:03:00.273 CC lib/ftl/utils/ftl_mempool.o 00:03:00.273 LIB libspdk_iscsi.a 00:03:00.273 CC lib/ftl/utils/ftl_bitmap.o 00:03:00.273 SO libspdk_nvmf.so.20.0 00:03:00.273 CC lib/ftl/utils/ftl_property.o 00:03:00.273 SO libspdk_iscsi.so.8.0 00:03:00.273 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:00.273 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:00.273 SYMLINK libspdk_nvmf.so 00:03:00.273 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:00.273 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:00.273 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:00.531 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:00.531 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:00.531 SYMLINK libspdk_iscsi.so 00:03:00.531 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:00.531 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:00.531 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:00.531 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:00.531 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:00.531 LIB libspdk_vhost.a 00:03:00.531 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:00.531 CC lib/ftl/base/ftl_base_dev.o 00:03:00.531 CC lib/ftl/base/ftl_base_bdev.o 00:03:00.789 CC lib/ftl/ftl_trace.o 00:03:00.789 SO libspdk_vhost.so.8.0 00:03:00.789 SYMLINK libspdk_vhost.so 00:03:01.048 LIB libspdk_ftl.a 00:03:01.307 SO libspdk_ftl.so.9.0 00:03:01.566 SYMLINK libspdk_ftl.so 00:03:01.825 CC module/env_dpdk/env_dpdk_rpc.o 00:03:01.825 CC module/keyring/file/keyring.o 00:03:01.825 CC module/fsdev/aio/fsdev_aio.o 00:03:01.825 CC module/keyring/linux/keyring.o 00:03:01.825 CC module/blob/bdev/blob_bdev.o 00:03:01.825 CC module/sock/posix/posix.o 00:03:01.825 CC module/accel/ioat/accel_ioat.o 00:03:01.825 CC module/accel/error/accel_error.o 00:03:01.825 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:01.825 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:01.825 LIB libspdk_env_dpdk_rpc.a 00:03:01.825 SO libspdk_env_dpdk_rpc.so.6.0 00:03:02.083 SYMLINK libspdk_env_dpdk_rpc.so 00:03:02.083 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:02.083 CC module/keyring/linux/keyring_rpc.o 00:03:02.083 CC module/keyring/file/keyring_rpc.o 00:03:02.083 LIB libspdk_scheduler_dpdk_governor.a 00:03:02.083 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:02.083 CC module/accel/ioat/accel_ioat_rpc.o 00:03:02.083 CC module/accel/error/accel_error_rpc.o 00:03:02.083 LIB libspdk_blob_bdev.a 00:03:02.083 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:02.083 LIB libspdk_scheduler_dynamic.a 00:03:02.083 LIB libspdk_keyring_linux.a 00:03:02.083 CC module/fsdev/aio/linux_aio_mgr.o 00:03:02.083 LIB libspdk_keyring_file.a 00:03:02.083 SO libspdk_blob_bdev.so.11.0 00:03:02.083 SO libspdk_scheduler_dynamic.so.4.0 00:03:02.083 SO libspdk_keyring_file.so.2.0 00:03:02.083 SO libspdk_keyring_linux.so.1.0 00:03:02.341 LIB libspdk_accel_ioat.a 00:03:02.341 SYMLINK libspdk_scheduler_dynamic.so 00:03:02.341 SYMLINK libspdk_blob_bdev.so 00:03:02.341 LIB 
libspdk_accel_error.a 00:03:02.341 SYMLINK libspdk_keyring_linux.so 00:03:02.341 SO libspdk_accel_ioat.so.6.0 00:03:02.341 SYMLINK libspdk_keyring_file.so 00:03:02.341 SO libspdk_accel_error.so.2.0 00:03:02.341 SYMLINK libspdk_accel_ioat.so 00:03:02.341 SYMLINK libspdk_accel_error.so 00:03:02.341 CC module/scheduler/gscheduler/gscheduler.o 00:03:02.341 CC module/sock/uring/uring.o 00:03:02.598 CC module/accel/dsa/accel_dsa.o 00:03:02.598 CC module/accel/iaa/accel_iaa.o 00:03:02.598 CC module/bdev/delay/vbdev_delay.o 00:03:02.598 LIB libspdk_fsdev_aio.a 00:03:02.598 LIB libspdk_scheduler_gscheduler.a 00:03:02.598 CC module/bdev/error/vbdev_error.o 00:03:02.598 CC module/bdev/gpt/gpt.o 00:03:02.598 CC module/blobfs/bdev/blobfs_bdev.o 00:03:02.598 SO libspdk_scheduler_gscheduler.so.4.0 00:03:02.598 LIB libspdk_sock_posix.a 00:03:02.598 SO libspdk_fsdev_aio.so.1.0 00:03:02.598 SO libspdk_sock_posix.so.6.0 00:03:02.598 SYMLINK libspdk_scheduler_gscheduler.so 00:03:02.598 CC module/bdev/gpt/vbdev_gpt.o 00:03:02.598 SYMLINK libspdk_fsdev_aio.so 00:03:02.599 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:02.599 SYMLINK libspdk_sock_posix.so 00:03:02.857 CC module/accel/iaa/accel_iaa_rpc.o 00:03:02.857 CC module/bdev/error/vbdev_error_rpc.o 00:03:02.857 CC module/accel/dsa/accel_dsa_rpc.o 00:03:02.857 LIB libspdk_blobfs_bdev.a 00:03:02.857 LIB libspdk_accel_iaa.a 00:03:02.857 CC module/bdev/lvol/vbdev_lvol.o 00:03:02.857 SO libspdk_blobfs_bdev.so.6.0 00:03:02.857 SO libspdk_accel_iaa.so.3.0 00:03:02.857 CC module/bdev/malloc/bdev_malloc.o 00:03:02.857 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:02.857 CC module/bdev/null/bdev_null.o 00:03:02.857 LIB libspdk_bdev_gpt.a 00:03:03.115 SYMLINK libspdk_blobfs_bdev.so 00:03:03.115 SYMLINK libspdk_accel_iaa.so 00:03:03.115 CC module/bdev/null/bdev_null_rpc.o 00:03:03.115 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:03.115 SO libspdk_bdev_gpt.so.6.0 00:03:03.115 LIB libspdk_accel_dsa.a 00:03:03.115 LIB libspdk_bdev_error.a 00:03:03.115 SO libspdk_accel_dsa.so.5.0 00:03:03.115 SO libspdk_bdev_error.so.6.0 00:03:03.115 SYMLINK libspdk_bdev_gpt.so 00:03:03.115 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:03.115 SYMLINK libspdk_bdev_error.so 00:03:03.115 LIB libspdk_bdev_delay.a 00:03:03.115 SYMLINK libspdk_accel_dsa.so 00:03:03.115 LIB libspdk_sock_uring.a 00:03:03.115 SO libspdk_bdev_delay.so.6.0 00:03:03.115 SO libspdk_sock_uring.so.5.0 00:03:03.115 SYMLINK libspdk_bdev_delay.so 00:03:03.372 SYMLINK libspdk_sock_uring.so 00:03:03.372 LIB libspdk_bdev_null.a 00:03:03.372 CC module/bdev/nvme/bdev_nvme.o 00:03:03.372 SO libspdk_bdev_null.so.6.0 00:03:03.372 CC module/bdev/passthru/vbdev_passthru.o 00:03:03.372 LIB libspdk_bdev_malloc.a 00:03:03.372 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:03.372 SYMLINK libspdk_bdev_null.so 00:03:03.372 SO libspdk_bdev_malloc.so.6.0 00:03:03.372 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:03.372 CC module/bdev/raid/bdev_raid.o 00:03:03.372 CC module/bdev/split/vbdev_split.o 00:03:03.372 LIB libspdk_bdev_lvol.a 00:03:03.372 SYMLINK libspdk_bdev_malloc.so 00:03:03.372 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:03.372 SO libspdk_bdev_lvol.so.6.0 00:03:03.630 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:03.630 SYMLINK libspdk_bdev_lvol.so 00:03:03.630 CC module/bdev/split/vbdev_split_rpc.o 00:03:03.630 CC module/bdev/uring/bdev_uring.o 00:03:03.630 CC module/bdev/aio/bdev_aio.o 00:03:03.630 LIB libspdk_bdev_passthru.a 00:03:03.630 CC module/bdev/uring/bdev_uring_rpc.o 00:03:03.630 SO 
libspdk_bdev_passthru.so.6.0 00:03:03.630 CC module/bdev/aio/bdev_aio_rpc.o 00:03:03.630 LIB libspdk_bdev_split.a 00:03:03.630 SYMLINK libspdk_bdev_passthru.so 00:03:03.630 SO libspdk_bdev_split.so.6.0 00:03:03.888 LIB libspdk_bdev_zone_block.a 00:03:03.888 SO libspdk_bdev_zone_block.so.6.0 00:03:03.888 SYMLINK libspdk_bdev_split.so 00:03:03.888 CC module/bdev/raid/bdev_raid_rpc.o 00:03:03.888 SYMLINK libspdk_bdev_zone_block.so 00:03:03.888 CC module/bdev/raid/bdev_raid_sb.o 00:03:03.888 CC module/bdev/ftl/bdev_ftl.o 00:03:03.888 LIB libspdk_bdev_uring.a 00:03:03.888 LIB libspdk_bdev_aio.a 00:03:03.888 SO libspdk_bdev_uring.so.6.0 00:03:03.888 SO libspdk_bdev_aio.so.6.0 00:03:03.888 CC module/bdev/iscsi/bdev_iscsi.o 00:03:03.888 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:04.146 SYMLINK libspdk_bdev_uring.so 00:03:04.146 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:04.146 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:04.146 SYMLINK libspdk_bdev_aio.so 00:03:04.146 CC module/bdev/raid/raid0.o 00:03:04.146 CC module/bdev/raid/raid1.o 00:03:04.146 CC module/bdev/raid/concat.o 00:03:04.146 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:04.146 CC module/bdev/nvme/nvme_rpc.o 00:03:04.470 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:04.470 CC module/bdev/nvme/bdev_mdns_client.o 00:03:04.470 CC module/bdev/nvme/vbdev_opal.o 00:03:04.470 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:04.470 LIB libspdk_bdev_ftl.a 00:03:04.470 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:04.470 LIB libspdk_bdev_raid.a 00:03:04.470 SO libspdk_bdev_ftl.so.6.0 00:03:04.470 LIB libspdk_bdev_iscsi.a 00:03:04.470 SO libspdk_bdev_iscsi.so.6.0 00:03:04.470 SO libspdk_bdev_raid.so.6.0 00:03:04.470 SYMLINK libspdk_bdev_ftl.so 00:03:04.470 LIB libspdk_bdev_virtio.a 00:03:04.470 SO libspdk_bdev_virtio.so.6.0 00:03:04.470 SYMLINK libspdk_bdev_iscsi.so 00:03:04.728 SYMLINK libspdk_bdev_raid.so 00:03:04.728 SYMLINK libspdk_bdev_virtio.so 00:03:06.103 LIB libspdk_bdev_nvme.a 00:03:06.103 SO libspdk_bdev_nvme.so.7.1 00:03:06.103 SYMLINK libspdk_bdev_nvme.so 00:03:06.668 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:06.668 CC module/event/subsystems/vmd/vmd.o 00:03:06.668 CC module/event/subsystems/sock/sock.o 00:03:06.668 CC module/event/subsystems/fsdev/fsdev.o 00:03:06.668 CC module/event/subsystems/iobuf/iobuf.o 00:03:06.668 CC module/event/subsystems/scheduler/scheduler.o 00:03:06.668 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:06.668 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:06.668 CC module/event/subsystems/keyring/keyring.o 00:03:06.668 LIB libspdk_event_vhost_blk.a 00:03:06.668 LIB libspdk_event_scheduler.a 00:03:06.668 LIB libspdk_event_sock.a 00:03:06.668 LIB libspdk_event_keyring.a 00:03:06.668 LIB libspdk_event_fsdev.a 00:03:06.668 LIB libspdk_event_vmd.a 00:03:06.668 SO libspdk_event_scheduler.so.4.0 00:03:06.668 SO libspdk_event_sock.so.5.0 00:03:06.668 SO libspdk_event_vhost_blk.so.3.0 00:03:06.668 SO libspdk_event_keyring.so.1.0 00:03:06.668 SO libspdk_event_fsdev.so.1.0 00:03:06.668 LIB libspdk_event_iobuf.a 00:03:06.668 SO libspdk_event_vmd.so.6.0 00:03:06.668 SYMLINK libspdk_event_vhost_blk.so 00:03:06.668 SYMLINK libspdk_event_scheduler.so 00:03:06.668 SYMLINK libspdk_event_sock.so 00:03:06.668 SO libspdk_event_iobuf.so.3.0 00:03:06.668 SYMLINK libspdk_event_keyring.so 00:03:06.668 SYMLINK libspdk_event_fsdev.so 00:03:06.668 SYMLINK libspdk_event_vmd.so 00:03:06.668 SYMLINK libspdk_event_iobuf.so 00:03:06.925 CC module/event/subsystems/accel/accel.o 00:03:07.183 LIB libspdk_event_accel.a 
00:03:07.183 SO libspdk_event_accel.so.6.0 00:03:07.183 SYMLINK libspdk_event_accel.so 00:03:07.441 CC module/event/subsystems/bdev/bdev.o 00:03:07.698 LIB libspdk_event_bdev.a 00:03:07.698 SO libspdk_event_bdev.so.6.0 00:03:07.698 SYMLINK libspdk_event_bdev.so 00:03:07.956 CC module/event/subsystems/ublk/ublk.o 00:03:07.956 CC module/event/subsystems/scsi/scsi.o 00:03:07.956 CC module/event/subsystems/nbd/nbd.o 00:03:07.956 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:07.956 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:08.215 LIB libspdk_event_ublk.a 00:03:08.215 LIB libspdk_event_nbd.a 00:03:08.215 LIB libspdk_event_scsi.a 00:03:08.215 SO libspdk_event_ublk.so.3.0 00:03:08.215 SO libspdk_event_nbd.so.6.0 00:03:08.215 SO libspdk_event_scsi.so.6.0 00:03:08.215 SYMLINK libspdk_event_ublk.so 00:03:08.215 SYMLINK libspdk_event_nbd.so 00:03:08.215 SYMLINK libspdk_event_scsi.so 00:03:08.215 LIB libspdk_event_nvmf.a 00:03:08.474 SO libspdk_event_nvmf.so.6.0 00:03:08.474 SYMLINK libspdk_event_nvmf.so 00:03:08.474 CC module/event/subsystems/iscsi/iscsi.o 00:03:08.474 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:08.732 LIB libspdk_event_iscsi.a 00:03:08.732 LIB libspdk_event_vhost_scsi.a 00:03:08.732 SO libspdk_event_iscsi.so.6.0 00:03:08.732 SO libspdk_event_vhost_scsi.so.3.0 00:03:08.732 SYMLINK libspdk_event_iscsi.so 00:03:08.732 SYMLINK libspdk_event_vhost_scsi.so 00:03:08.990 SO libspdk.so.6.0 00:03:08.990 SYMLINK libspdk.so 00:03:09.249 CC test/rpc_client/rpc_client_test.o 00:03:09.249 TEST_HEADER include/spdk/accel.h 00:03:09.249 TEST_HEADER include/spdk/accel_module.h 00:03:09.249 TEST_HEADER include/spdk/assert.h 00:03:09.249 TEST_HEADER include/spdk/barrier.h 00:03:09.249 TEST_HEADER include/spdk/base64.h 00:03:09.249 CXX app/trace/trace.o 00:03:09.249 TEST_HEADER include/spdk/bdev.h 00:03:09.249 TEST_HEADER include/spdk/bdev_module.h 00:03:09.249 CC app/trace_record/trace_record.o 00:03:09.249 TEST_HEADER include/spdk/bdev_zone.h 00:03:09.249 TEST_HEADER include/spdk/bit_array.h 00:03:09.249 TEST_HEADER include/spdk/bit_pool.h 00:03:09.249 TEST_HEADER include/spdk/blob_bdev.h 00:03:09.249 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:09.249 TEST_HEADER include/spdk/blobfs.h 00:03:09.249 TEST_HEADER include/spdk/blob.h 00:03:09.249 TEST_HEADER include/spdk/conf.h 00:03:09.249 TEST_HEADER include/spdk/config.h 00:03:09.249 TEST_HEADER include/spdk/cpuset.h 00:03:09.249 TEST_HEADER include/spdk/crc16.h 00:03:09.249 TEST_HEADER include/spdk/crc32.h 00:03:09.249 TEST_HEADER include/spdk/crc64.h 00:03:09.249 TEST_HEADER include/spdk/dif.h 00:03:09.249 TEST_HEADER include/spdk/dma.h 00:03:09.249 TEST_HEADER include/spdk/endian.h 00:03:09.249 TEST_HEADER include/spdk/env_dpdk.h 00:03:09.249 TEST_HEADER include/spdk/env.h 00:03:09.249 TEST_HEADER include/spdk/event.h 00:03:09.249 TEST_HEADER include/spdk/fd_group.h 00:03:09.249 TEST_HEADER include/spdk/fd.h 00:03:09.249 TEST_HEADER include/spdk/file.h 00:03:09.249 TEST_HEADER include/spdk/fsdev.h 00:03:09.249 TEST_HEADER include/spdk/fsdev_module.h 00:03:09.249 TEST_HEADER include/spdk/ftl.h 00:03:09.249 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:09.249 TEST_HEADER include/spdk/gpt_spec.h 00:03:09.249 CC app/nvmf_tgt/nvmf_main.o 00:03:09.249 TEST_HEADER include/spdk/hexlify.h 00:03:09.249 TEST_HEADER include/spdk/histogram_data.h 00:03:09.249 TEST_HEADER include/spdk/idxd.h 00:03:09.249 TEST_HEADER include/spdk/idxd_spec.h 00:03:09.249 TEST_HEADER include/spdk/init.h 00:03:09.249 TEST_HEADER include/spdk/ioat.h 
00:03:09.249 TEST_HEADER include/spdk/ioat_spec.h 00:03:09.249 TEST_HEADER include/spdk/iscsi_spec.h 00:03:09.249 TEST_HEADER include/spdk/json.h 00:03:09.249 TEST_HEADER include/spdk/jsonrpc.h 00:03:09.249 TEST_HEADER include/spdk/keyring.h 00:03:09.249 TEST_HEADER include/spdk/keyring_module.h 00:03:09.249 CC examples/util/zipf/zipf.o 00:03:09.249 TEST_HEADER include/spdk/likely.h 00:03:09.249 CC test/thread/poller_perf/poller_perf.o 00:03:09.249 TEST_HEADER include/spdk/log.h 00:03:09.249 TEST_HEADER include/spdk/lvol.h 00:03:09.249 TEST_HEADER include/spdk/md5.h 00:03:09.249 TEST_HEADER include/spdk/memory.h 00:03:09.249 TEST_HEADER include/spdk/mmio.h 00:03:09.249 TEST_HEADER include/spdk/nbd.h 00:03:09.249 TEST_HEADER include/spdk/net.h 00:03:09.249 TEST_HEADER include/spdk/notify.h 00:03:09.249 TEST_HEADER include/spdk/nvme.h 00:03:09.249 TEST_HEADER include/spdk/nvme_intel.h 00:03:09.249 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:09.249 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:09.249 TEST_HEADER include/spdk/nvme_spec.h 00:03:09.249 TEST_HEADER include/spdk/nvme_zns.h 00:03:09.249 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:09.249 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:09.249 TEST_HEADER include/spdk/nvmf.h 00:03:09.249 TEST_HEADER include/spdk/nvmf_spec.h 00:03:09.508 TEST_HEADER include/spdk/nvmf_transport.h 00:03:09.508 CC test/app/bdev_svc/bdev_svc.o 00:03:09.508 TEST_HEADER include/spdk/opal.h 00:03:09.508 TEST_HEADER include/spdk/opal_spec.h 00:03:09.508 TEST_HEADER include/spdk/pci_ids.h 00:03:09.508 TEST_HEADER include/spdk/pipe.h 00:03:09.508 TEST_HEADER include/spdk/queue.h 00:03:09.508 TEST_HEADER include/spdk/reduce.h 00:03:09.508 TEST_HEADER include/spdk/rpc.h 00:03:09.508 TEST_HEADER include/spdk/scheduler.h 00:03:09.508 TEST_HEADER include/spdk/scsi.h 00:03:09.508 TEST_HEADER include/spdk/scsi_spec.h 00:03:09.508 TEST_HEADER include/spdk/sock.h 00:03:09.508 TEST_HEADER include/spdk/stdinc.h 00:03:09.508 TEST_HEADER include/spdk/string.h 00:03:09.508 TEST_HEADER include/spdk/thread.h 00:03:09.508 TEST_HEADER include/spdk/trace.h 00:03:09.508 TEST_HEADER include/spdk/trace_parser.h 00:03:09.508 TEST_HEADER include/spdk/tree.h 00:03:09.508 CC test/dma/test_dma/test_dma.o 00:03:09.508 TEST_HEADER include/spdk/ublk.h 00:03:09.508 TEST_HEADER include/spdk/util.h 00:03:09.508 TEST_HEADER include/spdk/uuid.h 00:03:09.508 CC test/env/mem_callbacks/mem_callbacks.o 00:03:09.508 TEST_HEADER include/spdk/version.h 00:03:09.508 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:09.508 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:09.508 TEST_HEADER include/spdk/vhost.h 00:03:09.508 TEST_HEADER include/spdk/vmd.h 00:03:09.508 LINK rpc_client_test 00:03:09.508 TEST_HEADER include/spdk/xor.h 00:03:09.508 TEST_HEADER include/spdk/zipf.h 00:03:09.508 CXX test/cpp_headers/accel.o 00:03:09.508 LINK zipf 00:03:09.508 LINK poller_perf 00:03:09.508 LINK spdk_trace_record 00:03:09.508 LINK bdev_svc 00:03:09.508 LINK nvmf_tgt 00:03:09.508 CXX test/cpp_headers/accel_module.o 00:03:09.766 LINK spdk_trace 00:03:09.766 CXX test/cpp_headers/assert.o 00:03:09.766 CC examples/vmd/lsvmd/lsvmd.o 00:03:09.766 CC examples/ioat/perf/perf.o 00:03:10.024 CXX test/cpp_headers/barrier.o 00:03:10.024 CC examples/idxd/perf/perf.o 00:03:10.024 CC test/env/vtophys/vtophys.o 00:03:10.024 CC examples/ioat/verify/verify.o 00:03:10.024 LINK test_dma 00:03:10.024 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:10.024 CC app/iscsi_tgt/iscsi_tgt.o 00:03:10.024 LINK lsvmd 00:03:10.024 LINK vtophys 00:03:10.024 
CXX test/cpp_headers/base64.o 00:03:10.024 LINK mem_callbacks 00:03:10.024 LINK ioat_perf 00:03:10.282 LINK verify 00:03:10.282 CXX test/cpp_headers/bdev.o 00:03:10.282 LINK idxd_perf 00:03:10.282 LINK iscsi_tgt 00:03:10.282 CC examples/vmd/led/led.o 00:03:10.282 CC test/app/histogram_perf/histogram_perf.o 00:03:10.282 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:10.282 CC test/env/memory/memory_ut.o 00:03:10.282 CXX test/cpp_headers/bdev_module.o 00:03:10.282 CXX test/cpp_headers/bdev_zone.o 00:03:10.540 LINK nvme_fuzz 00:03:10.540 CC app/spdk_tgt/spdk_tgt.o 00:03:10.540 CC test/event/event_perf/event_perf.o 00:03:10.540 LINK led 00:03:10.540 LINK histogram_perf 00:03:10.540 LINK env_dpdk_post_init 00:03:10.540 CC test/env/pci/pci_ut.o 00:03:10.540 CXX test/cpp_headers/bit_array.o 00:03:10.540 LINK event_perf 00:03:10.540 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:10.798 LINK spdk_tgt 00:03:10.798 CC test/app/jsoncat/jsoncat.o 00:03:10.798 CC test/app/stub/stub.o 00:03:10.798 CXX test/cpp_headers/bit_pool.o 00:03:10.798 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:10.798 LINK jsoncat 00:03:10.798 CC test/event/reactor/reactor.o 00:03:10.798 CC examples/thread/thread/thread_ex.o 00:03:11.056 LINK stub 00:03:11.056 LINK pci_ut 00:03:11.056 CXX test/cpp_headers/blob_bdev.o 00:03:11.056 CC app/spdk_lspci/spdk_lspci.o 00:03:11.056 LINK interrupt_tgt 00:03:11.056 LINK reactor 00:03:11.056 CC app/spdk_nvme_perf/perf.o 00:03:11.056 LINK spdk_lspci 00:03:11.056 LINK thread 00:03:11.056 CXX test/cpp_headers/blobfs_bdev.o 00:03:11.314 CC app/spdk_nvme_identify/identify.o 00:03:11.314 CC app/spdk_nvme_discover/discovery_aer.o 00:03:11.314 CC test/event/reactor_perf/reactor_perf.o 00:03:11.314 CC app/spdk_top/spdk_top.o 00:03:11.314 CXX test/cpp_headers/blobfs.o 00:03:11.314 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:11.314 LINK reactor_perf 00:03:11.571 CC examples/sock/hello_world/hello_sock.o 00:03:11.571 LINK spdk_nvme_discover 00:03:11.571 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:11.572 CXX test/cpp_headers/blob.o 00:03:11.572 LINK memory_ut 00:03:11.572 CC test/event/app_repeat/app_repeat.o 00:03:11.830 CXX test/cpp_headers/conf.o 00:03:11.830 LINK hello_sock 00:03:11.830 LINK app_repeat 00:03:11.830 CC test/event/scheduler/scheduler.o 00:03:11.830 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:11.830 CXX test/cpp_headers/config.o 00:03:11.830 CXX test/cpp_headers/cpuset.o 00:03:11.830 LINK vhost_fuzz 00:03:12.088 LINK spdk_nvme_perf 00:03:12.088 LINK spdk_nvme_identify 00:03:12.088 CXX test/cpp_headers/crc16.o 00:03:12.088 CC test/nvme/aer/aer.o 00:03:12.088 CXX test/cpp_headers/crc32.o 00:03:12.088 LINK scheduler 00:03:12.088 CC test/nvme/reset/reset.o 00:03:12.088 LINK spdk_top 00:03:12.088 LINK hello_fsdev 00:03:12.347 CXX test/cpp_headers/crc64.o 00:03:12.347 LINK iscsi_fuzz 00:03:12.347 CC app/vhost/vhost.o 00:03:12.347 CC test/accel/dif/dif.o 00:03:12.347 CC app/spdk_dd/spdk_dd.o 00:03:12.347 LINK aer 00:03:12.347 LINK reset 00:03:12.347 CC test/nvme/sgl/sgl.o 00:03:12.605 CXX test/cpp_headers/dif.o 00:03:12.605 CC test/blobfs/mkfs/mkfs.o 00:03:12.605 LINK vhost 00:03:12.605 CC examples/accel/perf/accel_perf.o 00:03:12.605 CC test/nvme/e2edp/nvme_dp.o 00:03:12.605 CXX test/cpp_headers/dma.o 00:03:12.605 LINK sgl 00:03:12.864 CC examples/blob/hello_world/hello_blob.o 00:03:12.864 CC app/fio/nvme/fio_plugin.o 00:03:12.864 LINK mkfs 00:03:12.864 CXX test/cpp_headers/endian.o 00:03:12.864 LINK spdk_dd 00:03:12.864 CXX test/cpp_headers/env_dpdk.o 
00:03:12.864 LINK nvme_dp 00:03:12.864 CXX test/cpp_headers/env.o 00:03:13.122 LINK hello_blob 00:03:13.122 CC test/lvol/esnap/esnap.o 00:03:13.122 LINK dif 00:03:13.122 LINK accel_perf 00:03:13.122 CXX test/cpp_headers/event.o 00:03:13.122 CC test/nvme/overhead/overhead.o 00:03:13.122 CC test/nvme/err_injection/err_injection.o 00:03:13.122 CC test/nvme/startup/startup.o 00:03:13.122 CC test/nvme/reserve/reserve.o 00:03:13.380 CXX test/cpp_headers/fd_group.o 00:03:13.380 LINK spdk_nvme 00:03:13.380 CC examples/blob/cli/blobcli.o 00:03:13.380 CC app/fio/bdev/fio_plugin.o 00:03:13.380 LINK err_injection 00:03:13.380 LINK startup 00:03:13.380 CC examples/nvme/hello_world/hello_world.o 00:03:13.380 LINK overhead 00:03:13.380 LINK reserve 00:03:13.380 CXX test/cpp_headers/fd.o 00:03:13.638 CC examples/nvme/reconnect/reconnect.o 00:03:13.638 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:13.638 CXX test/cpp_headers/file.o 00:03:13.638 LINK hello_world 00:03:13.638 CC test/nvme/simple_copy/simple_copy.o 00:03:13.897 CC examples/bdev/hello_world/hello_bdev.o 00:03:13.897 CC test/bdev/bdevio/bdevio.o 00:03:13.897 CXX test/cpp_headers/fsdev.o 00:03:13.897 LINK blobcli 00:03:13.897 LINK reconnect 00:03:13.897 LINK spdk_bdev 00:03:13.897 CC examples/nvme/arbitration/arbitration.o 00:03:13.897 LINK simple_copy 00:03:14.156 CXX test/cpp_headers/fsdev_module.o 00:03:14.156 LINK hello_bdev 00:03:14.156 CC examples/nvme/hotplug/hotplug.o 00:03:14.156 LINK nvme_manage 00:03:14.156 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:14.156 CC examples/nvme/abort/abort.o 00:03:14.156 LINK bdevio 00:03:14.156 CXX test/cpp_headers/ftl.o 00:03:14.156 CC test/nvme/connect_stress/connect_stress.o 00:03:14.415 LINK arbitration 00:03:14.415 LINK cmb_copy 00:03:14.415 LINK hotplug 00:03:14.415 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:14.415 CC examples/bdev/bdevperf/bdevperf.o 00:03:14.415 CXX test/cpp_headers/fuse_dispatcher.o 00:03:14.415 LINK connect_stress 00:03:14.415 CC test/nvme/boot_partition/boot_partition.o 00:03:14.678 LINK abort 00:03:14.678 CC test/nvme/compliance/nvme_compliance.o 00:03:14.678 CC test/nvme/fused_ordering/fused_ordering.o 00:03:14.678 LINK pmr_persistence 00:03:14.678 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:14.678 CXX test/cpp_headers/gpt_spec.o 00:03:14.678 CXX test/cpp_headers/hexlify.o 00:03:14.678 CXX test/cpp_headers/histogram_data.o 00:03:14.678 LINK boot_partition 00:03:14.936 CXX test/cpp_headers/idxd.o 00:03:14.936 LINK fused_ordering 00:03:14.936 LINK doorbell_aers 00:03:14.936 CC test/nvme/fdp/fdp.o 00:03:14.936 CXX test/cpp_headers/idxd_spec.o 00:03:14.936 CXX test/cpp_headers/init.o 00:03:14.936 LINK nvme_compliance 00:03:14.936 CC test/nvme/cuse/cuse.o 00:03:14.936 CXX test/cpp_headers/ioat.o 00:03:14.936 CXX test/cpp_headers/ioat_spec.o 00:03:14.936 CXX test/cpp_headers/iscsi_spec.o 00:03:15.194 CXX test/cpp_headers/json.o 00:03:15.194 CXX test/cpp_headers/jsonrpc.o 00:03:15.194 CXX test/cpp_headers/keyring.o 00:03:15.194 CXX test/cpp_headers/keyring_module.o 00:03:15.194 CXX test/cpp_headers/likely.o 00:03:15.194 CXX test/cpp_headers/log.o 00:03:15.194 LINK fdp 00:03:15.194 CXX test/cpp_headers/lvol.o 00:03:15.194 CXX test/cpp_headers/md5.o 00:03:15.194 LINK bdevperf 00:03:15.194 CXX test/cpp_headers/memory.o 00:03:15.451 CXX test/cpp_headers/mmio.o 00:03:15.451 CXX test/cpp_headers/nbd.o 00:03:15.451 CXX test/cpp_headers/net.o 00:03:15.451 CXX test/cpp_headers/notify.o 00:03:15.451 CXX test/cpp_headers/nvme.o 00:03:15.451 CXX 
test/cpp_headers/nvme_intel.o 00:03:15.451 CXX test/cpp_headers/nvme_ocssd.o 00:03:15.451 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:15.451 CXX test/cpp_headers/nvme_spec.o 00:03:15.451 CXX test/cpp_headers/nvme_zns.o 00:03:15.451 CXX test/cpp_headers/nvmf_cmd.o 00:03:15.710 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:15.710 CXX test/cpp_headers/nvmf.o 00:03:15.710 CXX test/cpp_headers/nvmf_spec.o 00:03:15.710 CC examples/nvmf/nvmf/nvmf.o 00:03:15.710 CXX test/cpp_headers/nvmf_transport.o 00:03:15.710 CXX test/cpp_headers/opal.o 00:03:15.710 CXX test/cpp_headers/opal_spec.o 00:03:15.710 CXX test/cpp_headers/pci_ids.o 00:03:15.710 CXX test/cpp_headers/pipe.o 00:03:15.710 CXX test/cpp_headers/queue.o 00:03:15.710 CXX test/cpp_headers/reduce.o 00:03:15.710 CXX test/cpp_headers/rpc.o 00:03:15.968 CXX test/cpp_headers/scheduler.o 00:03:15.968 CXX test/cpp_headers/scsi.o 00:03:15.968 CXX test/cpp_headers/scsi_spec.o 00:03:15.968 CXX test/cpp_headers/sock.o 00:03:15.968 CXX test/cpp_headers/stdinc.o 00:03:15.968 LINK nvmf 00:03:15.968 CXX test/cpp_headers/string.o 00:03:15.968 CXX test/cpp_headers/thread.o 00:03:16.226 CXX test/cpp_headers/trace.o 00:03:16.226 CXX test/cpp_headers/trace_parser.o 00:03:16.226 CXX test/cpp_headers/tree.o 00:03:16.226 CXX test/cpp_headers/ublk.o 00:03:16.226 CXX test/cpp_headers/util.o 00:03:16.226 CXX test/cpp_headers/uuid.o 00:03:16.226 CXX test/cpp_headers/version.o 00:03:16.226 CXX test/cpp_headers/vfio_user_pci.o 00:03:16.226 CXX test/cpp_headers/vfio_user_spec.o 00:03:16.226 CXX test/cpp_headers/vhost.o 00:03:16.226 CXX test/cpp_headers/vmd.o 00:03:16.226 LINK cuse 00:03:16.226 CXX test/cpp_headers/xor.o 00:03:16.226 CXX test/cpp_headers/zipf.o 00:03:18.128 LINK esnap 00:03:18.386 00:03:18.386 real 1m27.586s 00:03:18.386 user 8m12.445s 00:03:18.386 sys 1m30.937s 00:03:18.386 12:38:27 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:18.386 12:38:27 make -- common/autotest_common.sh@10 -- $ set +x 00:03:18.386 ************************************ 00:03:18.386 END TEST make 00:03:18.386 ************************************ 00:03:18.645 12:38:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:18.645 12:38:27 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:18.645 12:38:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:18.645 12:38:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:18.645 12:38:27 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:18.645 12:38:27 -- pm/common@44 -- $ pid=5316 00:03:18.645 12:38:27 -- pm/common@50 -- $ kill -TERM 5316 00:03:18.645 12:38:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:18.645 12:38:27 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:18.645 12:38:27 -- pm/common@44 -- $ pid=5318 00:03:18.645 12:38:27 -- pm/common@50 -- $ kill -TERM 5318 00:03:18.645 12:38:27 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:18.645 12:38:27 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:18.645 12:38:27 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:18.645 12:38:27 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:18.645 12:38:27 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:18.645 12:38:27 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:18.645 12:38:27 -- scripts/common.sh@373 -- # 
cmp_versions 1.15 '<' 2 00:03:18.645 12:38:27 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:18.645 12:38:27 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:18.645 12:38:27 -- scripts/common.sh@336 -- # IFS=.-: 00:03:18.645 12:38:27 -- scripts/common.sh@336 -- # read -ra ver1 00:03:18.645 12:38:27 -- scripts/common.sh@337 -- # IFS=.-: 00:03:18.645 12:38:27 -- scripts/common.sh@337 -- # read -ra ver2 00:03:18.645 12:38:27 -- scripts/common.sh@338 -- # local 'op=<' 00:03:18.645 12:38:27 -- scripts/common.sh@340 -- # ver1_l=2 00:03:18.645 12:38:27 -- scripts/common.sh@341 -- # ver2_l=1 00:03:18.645 12:38:27 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:18.645 12:38:27 -- scripts/common.sh@344 -- # case "$op" in 00:03:18.645 12:38:27 -- scripts/common.sh@345 -- # : 1 00:03:18.645 12:38:27 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:18.646 12:38:27 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:18.646 12:38:27 -- scripts/common.sh@365 -- # decimal 1 00:03:18.646 12:38:27 -- scripts/common.sh@353 -- # local d=1 00:03:18.646 12:38:27 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:18.646 12:38:27 -- scripts/common.sh@355 -- # echo 1 00:03:18.646 12:38:27 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:18.646 12:38:27 -- scripts/common.sh@366 -- # decimal 2 00:03:18.646 12:38:27 -- scripts/common.sh@353 -- # local d=2 00:03:18.646 12:38:27 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:18.646 12:38:27 -- scripts/common.sh@355 -- # echo 2 00:03:18.646 12:38:27 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:18.646 12:38:27 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:18.646 12:38:27 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:18.646 12:38:27 -- scripts/common.sh@368 -- # return 0 00:03:18.646 12:38:27 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:18.646 12:38:27 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:18.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.646 --rc genhtml_branch_coverage=1 00:03:18.646 --rc genhtml_function_coverage=1 00:03:18.646 --rc genhtml_legend=1 00:03:18.646 --rc geninfo_all_blocks=1 00:03:18.646 --rc geninfo_unexecuted_blocks=1 00:03:18.646 00:03:18.646 ' 00:03:18.646 12:38:27 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:18.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.646 --rc genhtml_branch_coverage=1 00:03:18.646 --rc genhtml_function_coverage=1 00:03:18.646 --rc genhtml_legend=1 00:03:18.646 --rc geninfo_all_blocks=1 00:03:18.646 --rc geninfo_unexecuted_blocks=1 00:03:18.646 00:03:18.646 ' 00:03:18.646 12:38:27 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:18.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.646 --rc genhtml_branch_coverage=1 00:03:18.646 --rc genhtml_function_coverage=1 00:03:18.646 --rc genhtml_legend=1 00:03:18.646 --rc geninfo_all_blocks=1 00:03:18.646 --rc geninfo_unexecuted_blocks=1 00:03:18.646 00:03:18.646 ' 00:03:18.646 12:38:27 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:18.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.646 --rc genhtml_branch_coverage=1 00:03:18.646 --rc genhtml_function_coverage=1 00:03:18.646 --rc genhtml_legend=1 00:03:18.646 --rc geninfo_all_blocks=1 00:03:18.646 --rc geninfo_unexecuted_blocks=1 00:03:18.646 00:03:18.646 ' 00:03:18.646 12:38:27 -- spdk/autotest.sh@25 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:18.646 12:38:27 -- nvmf/common.sh@7 -- # uname -s 00:03:18.646 12:38:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:18.646 12:38:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:18.646 12:38:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:18.646 12:38:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:18.646 12:38:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:18.646 12:38:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:18.646 12:38:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:18.646 12:38:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:18.646 12:38:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:18.646 12:38:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:18.646 12:38:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:03:18.646 12:38:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:03:18.646 12:38:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:18.646 12:38:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:18.646 12:38:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:18.646 12:38:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:18.646 12:38:27 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:18.646 12:38:27 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:18.646 12:38:27 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:18.646 12:38:27 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:18.646 12:38:27 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:18.646 12:38:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:18.646 12:38:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:18.646 12:38:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:18.646 12:38:27 -- paths/export.sh@5 -- # export PATH 00:03:18.646 12:38:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:18.646 12:38:27 -- nvmf/common.sh@51 -- # : 0 00:03:18.646 12:38:27 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:18.646 12:38:27 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:18.646 12:38:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:18.646 12:38:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:18.646 12:38:27 -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:03:18.646 12:38:27 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:18.646 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:18.646 12:38:27 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:18.646 12:38:27 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:18.646 12:38:27 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:18.646 12:38:27 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:18.646 12:38:27 -- spdk/autotest.sh@32 -- # uname -s 00:03:18.646 12:38:27 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:18.646 12:38:27 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:18.646 12:38:27 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:18.646 12:38:27 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:18.646 12:38:27 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:18.646 12:38:27 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:18.905 12:38:27 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:18.905 12:38:27 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:18.905 12:38:27 -- spdk/autotest.sh@48 -- # udevadm_pid=54401 00:03:18.905 12:38:27 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:18.905 12:38:27 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:18.905 12:38:27 -- pm/common@17 -- # local monitor 00:03:18.905 12:38:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:18.905 12:38:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:18.905 12:38:27 -- pm/common@25 -- # sleep 1 00:03:18.905 12:38:27 -- pm/common@21 -- # date +%s 00:03:18.905 12:38:27 -- pm/common@21 -- # date +%s 00:03:18.905 12:38:27 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731674307 00:03:18.905 12:38:27 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731674307 00:03:18.905 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731674307_collect-cpu-load.pm.log 00:03:18.905 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731674307_collect-vmstat.pm.log 00:03:19.841 12:38:28 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:19.841 12:38:28 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:19.841 12:38:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:19.841 12:38:28 -- common/autotest_common.sh@10 -- # set +x 00:03:19.841 12:38:28 -- spdk/autotest.sh@59 -- # create_test_list 00:03:19.841 12:38:28 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:19.841 12:38:28 -- common/autotest_common.sh@10 -- # set +x 00:03:19.841 12:38:28 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:19.841 12:38:28 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:19.841 12:38:28 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:19.841 12:38:28 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:19.841 12:38:28 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:19.841 12:38:28 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 
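A condensed sketch of the setup steps traced above: routing kernel core dumps to SPDK's collector script and starting the background resource monitors. The script paths and arguments come from the trace; the redirect into /proc/sys/kernel/core_pattern and the rootdir/output_dir variables are assumptions, since the trace only shows the echoed pattern and the destination directory.

    rootdir=/home/vagrant/spdk_repo/spdk          # assumed; matches the paths in the trace
    output_dir=$rootdir/../output                 # assumed; matches the paths in the trace
    mkdir -p "$output_dir/coredumps"
    # Hand core dumps to the collector script instead of systemd-coredump (assumed redirect target)
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
    # Watch udev events and start the CPU-load and vmstat monitors in the background
    /usr/sbin/udevadm monitor --property &
    "$rootdir/scripts/perf/pm/collect-cpu-load" -d "$output_dir/power" -l -p "monitor.autotest.sh.$(date +%s)" &
    "$rootdir/scripts/perf/pm/collect-vmstat"   -d "$output_dir/power" -l -p "monitor.autotest.sh.$(date +%s)" &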
00:03:19.841 12:38:28 -- common/autotest_common.sh@1457 -- # uname 00:03:19.841 12:38:28 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:19.841 12:38:28 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:19.841 12:38:28 -- common/autotest_common.sh@1477 -- # uname 00:03:19.841 12:38:28 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:19.841 12:38:28 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:19.841 12:38:28 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:20.100 lcov: LCOV version 1.15 00:03:20.100 12:38:28 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:38.213 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:38.213 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:53.084 12:39:01 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:53.084 12:39:01 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:53.084 12:39:01 -- common/autotest_common.sh@10 -- # set +x 00:03:53.084 12:39:01 -- spdk/autotest.sh@78 -- # rm -f 00:03:53.084 12:39:01 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:53.343 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:53.601 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:53.601 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:53.601 12:39:02 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:53.601 12:39:02 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:53.601 12:39:02 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:53.601 12:39:02 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:53.601 12:39:02 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:53.601 12:39:02 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:53.601 12:39:02 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:53.601 12:39:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:53.601 12:39:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:53.601 12:39:02 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:53.601 12:39:02 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:03:53.601 12:39:02 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:53.601 12:39:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:53.601 12:39:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:53.601 12:39:02 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:53.601 12:39:02 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:03:53.601 12:39:02 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:03:53.601 12:39:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:53.601 12:39:02 -- common/autotest_common.sh@1653 -- 
# [[ none != none ]] 00:03:53.601 12:39:02 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:53.601 12:39:02 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:03:53.601 12:39:02 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:03:53.601 12:39:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:53.601 12:39:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:53.601 12:39:02 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:53.601 12:39:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:53.601 12:39:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:53.601 12:39:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:53.601 12:39:02 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:53.601 12:39:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:53.601 No valid GPT data, bailing 00:03:53.601 12:39:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:53.601 12:39:02 -- scripts/common.sh@394 -- # pt= 00:03:53.601 12:39:02 -- scripts/common.sh@395 -- # return 1 00:03:53.601 12:39:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:53.601 1+0 records in 00:03:53.601 1+0 records out 00:03:53.601 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00534483 s, 196 MB/s 00:03:53.601 12:39:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:53.601 12:39:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:53.601 12:39:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:53.601 12:39:02 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:53.601 12:39:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:53.601 No valid GPT data, bailing 00:03:53.601 12:39:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:53.601 12:39:02 -- scripts/common.sh@394 -- # pt= 00:03:53.601 12:39:02 -- scripts/common.sh@395 -- # return 1 00:03:53.601 12:39:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:53.601 1+0 records in 00:03:53.601 1+0 records out 00:03:53.601 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00463702 s, 226 MB/s 00:03:53.601 12:39:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:53.601 12:39:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:53.601 12:39:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:53.601 12:39:02 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:53.601 12:39:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:53.859 No valid GPT data, bailing 00:03:53.859 12:39:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:53.859 12:39:02 -- scripts/common.sh@394 -- # pt= 00:03:53.859 12:39:02 -- scripts/common.sh@395 -- # return 1 00:03:53.859 12:39:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:53.859 1+0 records in 00:03:53.859 1+0 records out 00:03:53.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00424426 s, 247 MB/s 00:03:53.859 12:39:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:53.859 12:39:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:53.859 12:39:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:53.859 12:39:02 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:53.859 12:39:02 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:53.859 No valid GPT data, bailing 00:03:53.859 12:39:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:53.859 12:39:02 -- scripts/common.sh@394 -- # pt= 00:03:53.859 12:39:02 -- scripts/common.sh@395 -- # return 1 00:03:53.859 12:39:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:53.860 1+0 records in 00:03:53.860 1+0 records out 00:03:53.860 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00323068 s, 325 MB/s 00:03:53.860 12:39:02 -- spdk/autotest.sh@105 -- # sync 00:03:53.860 12:39:02 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:53.860 12:39:02 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:53.860 12:39:02 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:56.390 12:39:04 -- spdk/autotest.sh@111 -- # uname -s 00:03:56.390 12:39:04 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:56.390 12:39:04 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:56.390 12:39:04 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:56.648 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:56.648 Hugepages 00:03:56.648 node hugesize free / total 00:03:56.648 node0 1048576kB 0 / 0 00:03:56.648 node0 2048kB 0 / 0 00:03:56.648 00:03:56.648 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:56.648 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:56.648 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:56.906 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:56.906 12:39:05 -- spdk/autotest.sh@117 -- # uname -s 00:03:56.906 12:39:05 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:56.906 12:39:05 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:56.906 12:39:05 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:57.473 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:57.731 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:57.731 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:57.731 12:39:06 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:58.666 12:39:07 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:58.666 12:39:07 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:58.666 12:39:07 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:58.666 12:39:07 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:58.666 12:39:07 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:58.666 12:39:07 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:58.666 12:39:07 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:58.666 12:39:07 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:58.666 12:39:07 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:58.666 12:39:07 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:03:58.666 12:39:07 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:58.666 12:39:07 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:59.233 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
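A minimal sketch of the pre-cleanup loop traced above: every non-zoned NVMe namespace whose partition probe comes back empty gets its first MiB zeroed. The blkid probe and the dd wipe mirror the traced commands; the glob and the zoned-device skip are simplified from the helpers shown in the trace.

    shopt -s extglob
    for dev in /dev/nvme*n!(*p*); do
        name=$(basename "$dev")
        # Zoned namespaces are skipped (queue/zoned reports something other than "none")
        [[ -e /sys/block/$name/queue/zoned && $(cat /sys/block/$name/queue/zoned) != none ]] && continue
        # No partition-table type reported: treat the namespace as unused and wipe its first MiB
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done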
00:03:59.233 Waiting for block devices as requested 00:03:59.233 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:59.233 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:59.233 12:39:07 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:59.233 12:39:07 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:59.233 12:39:07 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:03:59.233 12:39:07 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:59.233 12:39:07 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:59.233 12:39:07 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:59.233 12:39:07 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:59.233 12:39:07 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:03:59.233 12:39:07 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:03:59.233 12:39:07 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:03:59.233 12:39:07 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:59.233 12:39:07 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:03:59.233 12:39:07 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:59.233 12:39:07 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:59.233 12:39:07 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:59.233 12:39:07 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:59.233 12:39:07 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:03:59.233 12:39:07 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:59.233 12:39:07 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:59.491 12:39:07 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:59.491 12:39:07 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:59.491 12:39:07 -- common/autotest_common.sh@1543 -- # continue 00:03:59.491 12:39:07 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:59.491 12:39:07 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:59.491 12:39:07 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:59.491 12:39:07 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:03:59.491 12:39:07 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:59.491 12:39:07 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:59.491 12:39:07 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:59.491 12:39:07 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:59.491 12:39:07 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:59.491 12:39:07 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:59.491 12:39:07 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:59.491 12:39:07 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:59.491 12:39:07 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:59.491 12:39:07 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:59.491 12:39:07 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:59.491 12:39:07 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:59.491 12:39:07 
-- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:59.491 12:39:07 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:59.491 12:39:07 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:59.491 12:39:07 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:59.491 12:39:07 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:59.491 12:39:07 -- common/autotest_common.sh@1543 -- # continue 00:03:59.491 12:39:07 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:59.491 12:39:07 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:59.491 12:39:07 -- common/autotest_common.sh@10 -- # set +x 00:03:59.491 12:39:07 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:59.491 12:39:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:59.491 12:39:07 -- common/autotest_common.sh@10 -- # set +x 00:03:59.491 12:39:07 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:00.059 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.059 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.318 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.318 12:39:08 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:00.318 12:39:08 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:00.318 12:39:08 -- common/autotest_common.sh@10 -- # set +x 00:04:00.318 12:39:08 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:00.318 12:39:08 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:00.318 12:39:08 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:00.318 12:39:08 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:00.318 12:39:08 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:00.318 12:39:08 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:00.318 12:39:08 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:00.318 12:39:08 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:00.318 12:39:08 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:00.318 12:39:08 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:00.318 12:39:08 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:00.318 12:39:08 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:00.318 12:39:08 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:00.318 12:39:08 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:00.318 12:39:08 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:00.318 12:39:08 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:00.318 12:39:08 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:00.318 12:39:08 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:00.318 12:39:08 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:00.318 12:39:08 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:00.318 12:39:08 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:00.318 12:39:08 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:00.318 12:39:08 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:00.318 12:39:08 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:00.318 12:39:08 -- common/autotest_common.sh@1572 -- # return 0 
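The OPAL-revert decision above reduces to two fields of 'nvme id-ctrl': the Optional Admin Command Support word and the unallocated NVM capacity. A sketch of that check follows; treating bit 3 of oacs as the namespace-management capability is an assumption drawn from the oacs_ns_manage=8 value in the trace.

    ctrlr=/dev/nvme0
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    oacs_ns_manage=$(( oacs & 0x8 ))          # bit 3: namespace management/attachment supported
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    if (( oacs_ns_manage != 0 )) && (( unvmcap == 0 )); then
        # Namespace management is supported but no capacity is left unallocated, so nothing to revert
        echo "skipping $ctrlr"
    fi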
00:04:00.318 12:39:08 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:00.318 12:39:08 -- common/autotest_common.sh@1580 -- # return 0 00:04:00.318 12:39:08 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:00.318 12:39:08 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:00.318 12:39:08 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:00.318 12:39:08 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:00.318 12:39:08 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:00.318 12:39:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.318 12:39:08 -- common/autotest_common.sh@10 -- # set +x 00:04:00.318 12:39:08 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:04:00.318 12:39:08 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:00.318 12:39:08 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:00.318 12:39:08 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:00.318 12:39:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.318 12:39:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.318 12:39:08 -- common/autotest_common.sh@10 -- # set +x 00:04:00.577 ************************************ 00:04:00.577 START TEST env 00:04:00.577 ************************************ 00:04:00.577 12:39:08 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:00.577 * Looking for test storage... 00:04:00.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:00.577 12:39:09 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:00.577 12:39:09 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:00.577 12:39:09 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:00.577 12:39:09 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:00.577 12:39:09 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:00.577 12:39:09 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:00.577 12:39:09 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:00.577 12:39:09 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:00.577 12:39:09 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:00.577 12:39:09 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:00.577 12:39:09 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:00.577 12:39:09 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:00.577 12:39:09 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:00.577 12:39:09 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:00.577 12:39:09 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:00.577 12:39:09 env -- scripts/common.sh@344 -- # case "$op" in 00:04:00.577 12:39:09 env -- scripts/common.sh@345 -- # : 1 00:04:00.577 12:39:09 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:00.577 12:39:09 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:00.577 12:39:09 env -- scripts/common.sh@365 -- # decimal 1 00:04:00.577 12:39:09 env -- scripts/common.sh@353 -- # local d=1 00:04:00.577 12:39:09 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:00.577 12:39:09 env -- scripts/common.sh@355 -- # echo 1 00:04:00.577 12:39:09 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:00.577 12:39:09 env -- scripts/common.sh@366 -- # decimal 2 00:04:00.577 12:39:09 env -- scripts/common.sh@353 -- # local d=2 00:04:00.577 12:39:09 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:00.577 12:39:09 env -- scripts/common.sh@355 -- # echo 2 00:04:00.577 12:39:09 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:00.577 12:39:09 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:00.577 12:39:09 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:00.577 12:39:09 env -- scripts/common.sh@368 -- # return 0 00:04:00.577 12:39:09 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:00.577 12:39:09 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:00.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.577 --rc genhtml_branch_coverage=1 00:04:00.577 --rc genhtml_function_coverage=1 00:04:00.577 --rc genhtml_legend=1 00:04:00.577 --rc geninfo_all_blocks=1 00:04:00.577 --rc geninfo_unexecuted_blocks=1 00:04:00.577 00:04:00.577 ' 00:04:00.577 12:39:09 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:00.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.577 --rc genhtml_branch_coverage=1 00:04:00.577 --rc genhtml_function_coverage=1 00:04:00.577 --rc genhtml_legend=1 00:04:00.577 --rc geninfo_all_blocks=1 00:04:00.577 --rc geninfo_unexecuted_blocks=1 00:04:00.577 00:04:00.577 ' 00:04:00.577 12:39:09 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:00.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.577 --rc genhtml_branch_coverage=1 00:04:00.577 --rc genhtml_function_coverage=1 00:04:00.577 --rc genhtml_legend=1 00:04:00.577 --rc geninfo_all_blocks=1 00:04:00.577 --rc geninfo_unexecuted_blocks=1 00:04:00.577 00:04:00.577 ' 00:04:00.577 12:39:09 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:00.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.577 --rc genhtml_branch_coverage=1 00:04:00.577 --rc genhtml_function_coverage=1 00:04:00.577 --rc genhtml_legend=1 00:04:00.577 --rc geninfo_all_blocks=1 00:04:00.577 --rc geninfo_unexecuted_blocks=1 00:04:00.577 00:04:00.577 ' 00:04:00.577 12:39:09 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:00.577 12:39:09 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.577 12:39:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.577 12:39:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:00.577 ************************************ 00:04:00.577 START TEST env_memory 00:04:00.577 ************************************ 00:04:00.577 12:39:09 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:00.577 00:04:00.577 00:04:00.577 CUnit - A unit testing framework for C - Version 2.1-3 00:04:00.577 http://cunit.sourceforge.net/ 00:04:00.577 00:04:00.577 00:04:00.577 Suite: memory 00:04:00.577 Test: alloc and free memory map ...[2024-11-15 12:39:09.241491] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:00.836 passed 00:04:00.836 Test: mem map translation ...[2024-11-15 12:39:09.272558] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:00.836 [2024-11-15 12:39:09.272735] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:00.836 [2024-11-15 12:39:09.272796] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:00.836 [2024-11-15 12:39:09.272807] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:00.836 passed 00:04:00.836 Test: mem map registration ...[2024-11-15 12:39:09.336995] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:00.836 [2024-11-15 12:39:09.337182] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:00.836 passed 00:04:00.836 Test: mem map adjacent registrations ...passed 00:04:00.836 00:04:00.836 Run Summary: Type Total Ran Passed Failed Inactive 00:04:00.836 suites 1 1 n/a 0 0 00:04:00.836 tests 4 4 4 0 0 00:04:00.836 asserts 152 152 152 0 n/a 00:04:00.836 00:04:00.836 Elapsed time = 0.213 seconds 00:04:00.836 00:04:00.836 real 0m0.232s 00:04:00.836 user 0m0.213s 00:04:00.836 sys 0m0.012s 00:04:00.836 12:39:09 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.836 ************************************ 00:04:00.836 END TEST env_memory 00:04:00.836 ************************************ 00:04:00.836 12:39:09 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:00.836 12:39:09 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:00.836 12:39:09 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.836 12:39:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.836 12:39:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:00.836 ************************************ 00:04:00.836 START TEST env_vtophys 00:04:00.836 ************************************ 00:04:00.836 12:39:09 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:00.836 EAL: lib.eal log level changed from notice to debug 00:04:00.836 EAL: Detected lcore 0 as core 0 on socket 0 00:04:00.836 EAL: Detected lcore 1 as core 0 on socket 0 00:04:00.836 EAL: Detected lcore 2 as core 0 on socket 0 00:04:00.836 EAL: Detected lcore 3 as core 0 on socket 0 00:04:00.836 EAL: Detected lcore 4 as core 0 on socket 0 00:04:00.836 EAL: Detected lcore 5 as core 0 on socket 0 00:04:00.836 EAL: Detected lcore 6 as core 0 on socket 0 00:04:00.836 EAL: Detected lcore 7 as core 0 on socket 0 00:04:00.836 EAL: Detected lcore 8 as core 0 on socket 0 00:04:00.836 EAL: Detected lcore 9 as core 0 on socket 0 00:04:01.096 EAL: Maximum logical cores by configuration: 128 00:04:01.096 EAL: Detected CPU lcores: 10 00:04:01.096 EAL: Detected NUMA nodes: 1 00:04:01.096 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:01.096 EAL: Detected shared linkage of DPDK 00:04:01.096 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:01.096 EAL: Selected IOVA mode 'PA' 00:04:01.096 EAL: Probing VFIO support... 00:04:01.096 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:01.096 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:01.096 EAL: Ask a virtual area of 0x2e000 bytes 00:04:01.096 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:01.096 EAL: Setting up physically contiguous memory... 00:04:01.096 EAL: Setting maximum number of open files to 524288 00:04:01.096 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:01.096 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:01.096 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.096 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:01.096 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:01.096 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.096 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:01.096 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:01.096 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.096 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:01.096 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:01.096 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.096 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:01.096 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:01.096 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.096 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:01.096 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:01.096 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.096 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:01.096 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:01.096 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.096 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:01.096 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:01.096 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.096 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:01.096 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:01.096 EAL: Hugepages will be freed exactly as allocated. 00:04:01.096 EAL: No shared files mode enabled, IPC is disabled 00:04:01.096 EAL: No shared files mode enabled, IPC is disabled 00:04:01.096 EAL: TSC frequency is ~2200000 KHz 00:04:01.096 EAL: Main lcore 0 is ready (tid=7f85db698a00;cpuset=[0]) 00:04:01.096 EAL: Trying to obtain current memory policy. 00:04:01.096 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.096 EAL: Restoring previous memory policy: 0 00:04:01.096 EAL: request: mp_malloc_sync 00:04:01.096 EAL: No shared files mode enabled, IPC is disabled 00:04:01.096 EAL: Heap on socket 0 was expanded by 2MB 00:04:01.096 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:01.096 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:01.096 EAL: Mem event callback 'spdk:(nil)' registered 00:04:01.096 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:01.096 00:04:01.096 00:04:01.096 CUnit - A unit testing framework for C - Version 2.1-3 00:04:01.096 http://cunit.sourceforge.net/ 00:04:01.096 00:04:01.096 00:04:01.096 Suite: components_suite 00:04:01.096 Test: vtophys_malloc_test ...passed 00:04:01.096 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:01.096 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.096 EAL: Restoring previous memory policy: 4 00:04:01.096 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.096 EAL: request: mp_malloc_sync 00:04:01.096 EAL: No shared files mode enabled, IPC is disabled 00:04:01.096 EAL: Heap on socket 0 was expanded by 4MB 00:04:01.096 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.096 EAL: request: mp_malloc_sync 00:04:01.096 EAL: No shared files mode enabled, IPC is disabled 00:04:01.096 EAL: Heap on socket 0 was shrunk by 4MB 00:04:01.096 EAL: Trying to obtain current memory policy. 00:04:01.096 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.096 EAL: Restoring previous memory policy: 4 00:04:01.096 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.096 EAL: request: mp_malloc_sync 00:04:01.096 EAL: No shared files mode enabled, IPC is disabled 00:04:01.096 EAL: Heap on socket 0 was expanded by 6MB 00:04:01.096 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.096 EAL: request: mp_malloc_sync 00:04:01.096 EAL: No shared files mode enabled, IPC is disabled 00:04:01.096 EAL: Heap on socket 0 was shrunk by 6MB 00:04:01.096 EAL: Trying to obtain current memory policy. 00:04:01.096 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.096 EAL: Restoring previous memory policy: 4 00:04:01.096 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.096 EAL: request: mp_malloc_sync 00:04:01.096 EAL: No shared files mode enabled, IPC is disabled 00:04:01.096 EAL: Heap on socket 0 was expanded by 10MB 00:04:01.096 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.096 EAL: request: mp_malloc_sync 00:04:01.096 EAL: No shared files mode enabled, IPC is disabled 00:04:01.096 EAL: Heap on socket 0 was shrunk by 10MB 00:04:01.096 EAL: Trying to obtain current memory policy. 00:04:01.096 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.096 EAL: Restoring previous memory policy: 4 00:04:01.096 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.096 EAL: request: mp_malloc_sync 00:04:01.096 EAL: No shared files mode enabled, IPC is disabled 00:04:01.096 EAL: Heap on socket 0 was expanded by 18MB 00:04:01.096 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.096 EAL: request: mp_malloc_sync 00:04:01.096 EAL: No shared files mode enabled, IPC is disabled 00:04:01.096 EAL: Heap on socket 0 was shrunk by 18MB 00:04:01.096 EAL: Trying to obtain current memory policy. 00:04:01.096 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.096 EAL: Restoring previous memory policy: 4 00:04:01.096 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.096 EAL: request: mp_malloc_sync 00:04:01.096 EAL: No shared files mode enabled, IPC is disabled 00:04:01.096 EAL: Heap on socket 0 was expanded by 34MB 00:04:01.096 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.096 EAL: request: mp_malloc_sync 00:04:01.096 EAL: No shared files mode enabled, IPC is disabled 00:04:01.096 EAL: Heap on socket 0 was shrunk by 34MB 00:04:01.096 EAL: Trying to obtain current memory policy. 
00:04:01.096 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.096 EAL: Restoring previous memory policy: 4 00:04:01.096 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.096 EAL: request: mp_malloc_sync 00:04:01.096 EAL: No shared files mode enabled, IPC is disabled 00:04:01.096 EAL: Heap on socket 0 was expanded by 66MB 00:04:01.096 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.096 EAL: request: mp_malloc_sync 00:04:01.096 EAL: No shared files mode enabled, IPC is disabled 00:04:01.096 EAL: Heap on socket 0 was shrunk by 66MB 00:04:01.096 EAL: Trying to obtain current memory policy. 00:04:01.096 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.097 EAL: Restoring previous memory policy: 4 00:04:01.097 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.097 EAL: request: mp_malloc_sync 00:04:01.097 EAL: No shared files mode enabled, IPC is disabled 00:04:01.097 EAL: Heap on socket 0 was expanded by 130MB 00:04:01.097 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.097 EAL: request: mp_malloc_sync 00:04:01.097 EAL: No shared files mode enabled, IPC is disabled 00:04:01.097 EAL: Heap on socket 0 was shrunk by 130MB 00:04:01.097 EAL: Trying to obtain current memory policy. 00:04:01.097 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.356 EAL: Restoring previous memory policy: 4 00:04:01.356 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.356 EAL: request: mp_malloc_sync 00:04:01.356 EAL: No shared files mode enabled, IPC is disabled 00:04:01.356 EAL: Heap on socket 0 was expanded by 258MB 00:04:01.356 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.356 EAL: request: mp_malloc_sync 00:04:01.356 EAL: No shared files mode enabled, IPC is disabled 00:04:01.356 EAL: Heap on socket 0 was shrunk by 258MB 00:04:01.356 EAL: Trying to obtain current memory policy. 00:04:01.356 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.356 EAL: Restoring previous memory policy: 4 00:04:01.356 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.356 EAL: request: mp_malloc_sync 00:04:01.356 EAL: No shared files mode enabled, IPC is disabled 00:04:01.356 EAL: Heap on socket 0 was expanded by 514MB 00:04:01.356 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.356 EAL: request: mp_malloc_sync 00:04:01.356 EAL: No shared files mode enabled, IPC is disabled 00:04:01.356 EAL: Heap on socket 0 was shrunk by 514MB 00:04:01.356 EAL: Trying to obtain current memory policy. 
00:04:01.356 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.614 EAL: Restoring previous memory policy: 4 00:04:01.614 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.614 EAL: request: mp_malloc_sync 00:04:01.614 EAL: No shared files mode enabled, IPC is disabled 00:04:01.614 EAL: Heap on socket 0 was expanded by 1026MB 00:04:01.614 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.873 passed 00:04:01.873 00:04:01.873 Run Summary: Type Total Ran Passed Failed Inactive 00:04:01.873 suites 1 1 n/a 0 0 00:04:01.873 tests 2 2 2 0 0 00:04:01.873 asserts 5358 5358 5358 0 n/a 00:04:01.873 00:04:01.873 Elapsed time = 0.696 seconds 00:04:01.873 EAL: request: mp_malloc_sync 00:04:01.873 EAL: No shared files mode enabled, IPC is disabled 00:04:01.873 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:01.873 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.873 EAL: request: mp_malloc_sync 00:04:01.873 EAL: No shared files mode enabled, IPC is disabled 00:04:01.873 EAL: Heap on socket 0 was shrunk by 2MB 00:04:01.873 EAL: No shared files mode enabled, IPC is disabled 00:04:01.873 EAL: No shared files mode enabled, IPC is disabled 00:04:01.873 EAL: No shared files mode enabled, IPC is disabled 00:04:01.873 00:04:01.873 real 0m0.906s 00:04:01.873 user 0m0.471s 00:04:01.873 sys 0m0.304s 00:04:01.873 12:39:10 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.873 ************************************ 00:04:01.873 END TEST env_vtophys 00:04:01.873 ************************************ 00:04:01.873 12:39:10 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:01.873 12:39:10 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:01.873 12:39:10 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.873 12:39:10 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.873 12:39:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:01.873 ************************************ 00:04:01.873 START TEST env_pci 00:04:01.873 ************************************ 00:04:01.873 12:39:10 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:01.873 00:04:01.873 00:04:01.873 CUnit - A unit testing framework for C - Version 2.1-3 00:04:01.873 http://cunit.sourceforge.net/ 00:04:01.873 00:04:01.873 00:04:01.873 Suite: pci 00:04:01.873 Test: pci_hook ...[2024-11-15 12:39:10.452189] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56645 has claimed it 00:04:01.873 passed 00:04:01.873 00:04:01.873 Run Summary: Type Total Ran Passed Failed Inactive 00:04:01.873 suites 1 1 n/a 0 0 00:04:01.873 tests 1 1 1 0 0 00:04:01.873 asserts 25 25 25 0 n/a 00:04:01.873 00:04:01.873 Elapsed time = 0.002 seconds 00:04:01.873 EAL: Cannot find device (10000:00:01.0) 00:04:01.873 EAL: Failed to attach device on primary process 00:04:01.873 00:04:01.873 real 0m0.023s 00:04:01.873 user 0m0.012s 00:04:01.873 sys 0m0.010s 00:04:01.873 12:39:10 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.873 ************************************ 00:04:01.873 END TEST env_pci 00:04:01.873 ************************************ 00:04:01.873 12:39:10 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:01.873 12:39:10 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:01.873 12:39:10 env -- env/env.sh@15 -- # uname 00:04:01.873 12:39:10 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:01.873 12:39:10 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:01.873 12:39:10 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:01.873 12:39:10 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:01.873 12:39:10 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.873 12:39:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:01.873 ************************************ 00:04:01.873 START TEST env_dpdk_post_init 00:04:01.873 ************************************ 00:04:01.873 12:39:10 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:02.131 EAL: Detected CPU lcores: 10 00:04:02.131 EAL: Detected NUMA nodes: 1 00:04:02.131 EAL: Detected shared linkage of DPDK 00:04:02.131 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:02.131 EAL: Selected IOVA mode 'PA' 00:04:02.131 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:02.131 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:02.131 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:02.131 Starting DPDK initialization... 00:04:02.131 Starting SPDK post initialization... 00:04:02.131 SPDK NVMe probe 00:04:02.131 Attaching to 0000:00:10.0 00:04:02.131 Attaching to 0000:00:11.0 00:04:02.131 Attached to 0000:00:10.0 00:04:02.131 Attached to 0000:00:11.0 00:04:02.131 Cleaning up... 00:04:02.131 00:04:02.131 real 0m0.193s 00:04:02.131 user 0m0.053s 00:04:02.131 sys 0m0.040s 00:04:02.131 12:39:10 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.131 ************************************ 00:04:02.131 END TEST env_dpdk_post_init 00:04:02.131 ************************************ 00:04:02.131 12:39:10 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:02.131 12:39:10 env -- env/env.sh@26 -- # uname 00:04:02.131 12:39:10 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:02.131 12:39:10 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:02.131 12:39:10 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.131 12:39:10 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.131 12:39:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.131 ************************************ 00:04:02.131 START TEST env_mem_callbacks 00:04:02.131 ************************************ 00:04:02.131 12:39:10 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:02.131 EAL: Detected CPU lcores: 10 00:04:02.131 EAL: Detected NUMA nodes: 1 00:04:02.131 EAL: Detected shared linkage of DPDK 00:04:02.131 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:02.131 EAL: Selected IOVA mode 'PA' 00:04:02.390 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:02.390 00:04:02.390 00:04:02.390 CUnit - A unit testing framework for C - Version 2.1-3 00:04:02.390 http://cunit.sourceforge.net/ 00:04:02.390 00:04:02.390 00:04:02.390 Suite: memory 00:04:02.390 Test: test ... 
00:04:02.390 register 0x200000200000 2097152 00:04:02.390 malloc 3145728 00:04:02.390 register 0x200000400000 4194304 00:04:02.390 buf 0x200000500000 len 3145728 PASSED 00:04:02.390 malloc 64 00:04:02.390 buf 0x2000004fff40 len 64 PASSED 00:04:02.390 malloc 4194304 00:04:02.390 register 0x200000800000 6291456 00:04:02.390 buf 0x200000a00000 len 4194304 PASSED 00:04:02.390 free 0x200000500000 3145728 00:04:02.390 free 0x2000004fff40 64 00:04:02.390 unregister 0x200000400000 4194304 PASSED 00:04:02.390 free 0x200000a00000 4194304 00:04:02.390 unregister 0x200000800000 6291456 PASSED 00:04:02.390 malloc 8388608 00:04:02.390 register 0x200000400000 10485760 00:04:02.390 buf 0x200000600000 len 8388608 PASSED 00:04:02.390 free 0x200000600000 8388608 00:04:02.390 unregister 0x200000400000 10485760 PASSED 00:04:02.390 passed 00:04:02.390 00:04:02.390 Run Summary: Type Total Ran Passed Failed Inactive 00:04:02.390 suites 1 1 n/a 0 0 00:04:02.390 tests 1 1 1 0 0 00:04:02.390 asserts 15 15 15 0 n/a 00:04:02.390 00:04:02.390 Elapsed time = 0.008 seconds 00:04:02.390 00:04:02.390 real 0m0.143s 00:04:02.390 user 0m0.015s 00:04:02.390 sys 0m0.027s 00:04:02.390 12:39:10 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.390 ************************************ 00:04:02.390 END TEST env_mem_callbacks 00:04:02.390 ************************************ 00:04:02.390 12:39:10 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:02.390 00:04:02.390 real 0m1.962s 00:04:02.390 user 0m0.965s 00:04:02.390 sys 0m0.633s 00:04:02.390 12:39:10 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.390 12:39:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.390 ************************************ 00:04:02.390 END TEST env 00:04:02.390 ************************************ 00:04:02.390 12:39:10 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:02.390 12:39:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.390 12:39:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.390 12:39:10 -- common/autotest_common.sh@10 -- # set +x 00:04:02.390 ************************************ 00:04:02.390 START TEST rpc 00:04:02.390 ************************************ 00:04:02.390 12:39:11 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:02.649 * Looking for test storage... 
00:04:02.649 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:02.649 12:39:11 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:02.649 12:39:11 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:02.649 12:39:11 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:02.649 12:39:11 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:02.649 12:39:11 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.649 12:39:11 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.649 12:39:11 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.649 12:39:11 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.649 12:39:11 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.649 12:39:11 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.649 12:39:11 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:02.649 12:39:11 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.649 12:39:11 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.649 12:39:11 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.649 12:39:11 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.649 12:39:11 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:02.649 12:39:11 rpc -- scripts/common.sh@345 -- # : 1 00:04:02.649 12:39:11 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.649 12:39:11 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:02.649 12:39:11 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:02.649 12:39:11 rpc -- scripts/common.sh@353 -- # local d=1 00:04:02.649 12:39:11 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.649 12:39:11 rpc -- scripts/common.sh@355 -- # echo 1 00:04:02.649 12:39:11 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.649 12:39:11 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:02.649 12:39:11 rpc -- scripts/common.sh@353 -- # local d=2 00:04:02.649 12:39:11 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.649 12:39:11 rpc -- scripts/common.sh@355 -- # echo 2 00:04:02.649 12:39:11 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.649 12:39:11 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.649 12:39:11 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.649 12:39:11 rpc -- scripts/common.sh@368 -- # return 0 00:04:02.649 12:39:11 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.649 12:39:11 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:02.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.649 --rc genhtml_branch_coverage=1 00:04:02.649 --rc genhtml_function_coverage=1 00:04:02.649 --rc genhtml_legend=1 00:04:02.649 --rc geninfo_all_blocks=1 00:04:02.649 --rc geninfo_unexecuted_blocks=1 00:04:02.649 00:04:02.649 ' 00:04:02.649 12:39:11 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:02.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.649 --rc genhtml_branch_coverage=1 00:04:02.649 --rc genhtml_function_coverage=1 00:04:02.649 --rc genhtml_legend=1 00:04:02.649 --rc geninfo_all_blocks=1 00:04:02.649 --rc geninfo_unexecuted_blocks=1 00:04:02.649 00:04:02.649 ' 00:04:02.649 12:39:11 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:02.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.649 --rc genhtml_branch_coverage=1 00:04:02.649 --rc genhtml_function_coverage=1 00:04:02.649 --rc 
genhtml_legend=1 00:04:02.649 --rc geninfo_all_blocks=1 00:04:02.649 --rc geninfo_unexecuted_blocks=1 00:04:02.649 00:04:02.649 ' 00:04:02.649 12:39:11 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:02.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.649 --rc genhtml_branch_coverage=1 00:04:02.649 --rc genhtml_function_coverage=1 00:04:02.649 --rc genhtml_legend=1 00:04:02.649 --rc geninfo_all_blocks=1 00:04:02.649 --rc geninfo_unexecuted_blocks=1 00:04:02.649 00:04:02.649 ' 00:04:02.649 12:39:11 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56762 00:04:02.649 12:39:11 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:02.649 12:39:11 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:02.649 12:39:11 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56762 00:04:02.649 12:39:11 rpc -- common/autotest_common.sh@835 -- # '[' -z 56762 ']' 00:04:02.649 12:39:11 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.649 12:39:11 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:02.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:02.649 12:39:11 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:02.649 12:39:11 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:02.649 12:39:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.649 [2024-11-15 12:39:11.252169] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:04:02.649 [2024-11-15 12:39:11.252300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56762 ] 00:04:02.908 [2024-11-15 12:39:11.409810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.908 [2024-11-15 12:39:11.448636] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:02.908 [2024-11-15 12:39:11.448704] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56762' to capture a snapshot of events at runtime. 00:04:02.908 [2024-11-15 12:39:11.448717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:02.908 [2024-11-15 12:39:11.448727] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:02.908 [2024-11-15 12:39:11.448735] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56762 for offline analysis/debug. 
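The app_setup_trace notices above describe two ways to inspect the tracepoints enabled by '-e bdev'. A minimal sketch of both, assuming the spdk_trace tool from the same build tree (the tool's path is not shown in this log, so it is an assumption here); the arguments are taken verbatim from the notices:

    # Live snapshot of the running target (pid 56762, as reported by app_setup_trace above)
    ./build/bin/spdk_trace -s spdk_tgt -p 56762
    # Or keep the shared-memory trace file for offline analysis/debug
    cp /dev/shm/spdk_tgt_trace.pid56762 /tmp/spdk_tgt_trace.pid56762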
00:04:02.908 [2024-11-15 12:39:11.449144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.908 [2024-11-15 12:39:11.494533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:03.166 12:39:11 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:03.166 12:39:11 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:03.166 12:39:11 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:03.166 12:39:11 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:03.166 12:39:11 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:03.166 12:39:11 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:03.166 12:39:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.166 12:39:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.166 12:39:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.166 ************************************ 00:04:03.166 START TEST rpc_integrity 00:04:03.166 ************************************ 00:04:03.166 12:39:11 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:03.166 12:39:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:03.166 12:39:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.166 12:39:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.166 12:39:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.166 12:39:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:03.166 12:39:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:03.166 12:39:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:03.166 12:39:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:03.166 12:39:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.166 12:39:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.166 12:39:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.166 12:39:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:03.166 12:39:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:03.166 12:39:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.166 12:39:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.166 12:39:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.166 12:39:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:03.166 { 00:04:03.166 "name": "Malloc0", 00:04:03.166 "aliases": [ 00:04:03.166 "b94f9f8c-6d91-4c34-b1fd-b0194adcbde8" 00:04:03.166 ], 00:04:03.166 "product_name": "Malloc disk", 00:04:03.166 "block_size": 512, 00:04:03.166 "num_blocks": 16384, 00:04:03.166 "uuid": "b94f9f8c-6d91-4c34-b1fd-b0194adcbde8", 00:04:03.166 "assigned_rate_limits": { 00:04:03.166 "rw_ios_per_sec": 0, 00:04:03.166 "rw_mbytes_per_sec": 0, 00:04:03.166 "r_mbytes_per_sec": 0, 00:04:03.166 "w_mbytes_per_sec": 0 00:04:03.166 }, 00:04:03.166 "claimed": false, 00:04:03.166 "zoned": false, 00:04:03.166 
"supported_io_types": { 00:04:03.166 "read": true, 00:04:03.166 "write": true, 00:04:03.166 "unmap": true, 00:04:03.166 "flush": true, 00:04:03.166 "reset": true, 00:04:03.166 "nvme_admin": false, 00:04:03.166 "nvme_io": false, 00:04:03.166 "nvme_io_md": false, 00:04:03.166 "write_zeroes": true, 00:04:03.166 "zcopy": true, 00:04:03.166 "get_zone_info": false, 00:04:03.166 "zone_management": false, 00:04:03.166 "zone_append": false, 00:04:03.166 "compare": false, 00:04:03.166 "compare_and_write": false, 00:04:03.166 "abort": true, 00:04:03.166 "seek_hole": false, 00:04:03.166 "seek_data": false, 00:04:03.166 "copy": true, 00:04:03.166 "nvme_iov_md": false 00:04:03.166 }, 00:04:03.166 "memory_domains": [ 00:04:03.166 { 00:04:03.166 "dma_device_id": "system", 00:04:03.166 "dma_device_type": 1 00:04:03.166 }, 00:04:03.166 { 00:04:03.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.166 "dma_device_type": 2 00:04:03.166 } 00:04:03.166 ], 00:04:03.166 "driver_specific": {} 00:04:03.166 } 00:04:03.166 ]' 00:04:03.166 12:39:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:03.166 12:39:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:03.166 12:39:11 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:03.166 12:39:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.166 12:39:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.166 [2024-11-15 12:39:11.802687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:03.166 [2024-11-15 12:39:11.802736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:03.166 [2024-11-15 12:39:11.802755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaa4f10 00:04:03.166 [2024-11-15 12:39:11.802765] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:03.166 [2024-11-15 12:39:11.804308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:03.166 [2024-11-15 12:39:11.804351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:03.166 Passthru0 00:04:03.166 12:39:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.166 12:39:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:03.166 12:39:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.166 12:39:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.424 12:39:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.424 12:39:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:03.424 { 00:04:03.424 "name": "Malloc0", 00:04:03.424 "aliases": [ 00:04:03.424 "b94f9f8c-6d91-4c34-b1fd-b0194adcbde8" 00:04:03.424 ], 00:04:03.424 "product_name": "Malloc disk", 00:04:03.424 "block_size": 512, 00:04:03.424 "num_blocks": 16384, 00:04:03.424 "uuid": "b94f9f8c-6d91-4c34-b1fd-b0194adcbde8", 00:04:03.424 "assigned_rate_limits": { 00:04:03.424 "rw_ios_per_sec": 0, 00:04:03.424 "rw_mbytes_per_sec": 0, 00:04:03.424 "r_mbytes_per_sec": 0, 00:04:03.424 "w_mbytes_per_sec": 0 00:04:03.424 }, 00:04:03.424 "claimed": true, 00:04:03.424 "claim_type": "exclusive_write", 00:04:03.424 "zoned": false, 00:04:03.424 "supported_io_types": { 00:04:03.424 "read": true, 00:04:03.424 "write": true, 00:04:03.424 "unmap": true, 00:04:03.424 "flush": true, 00:04:03.424 "reset": true, 00:04:03.424 "nvme_admin": false, 
00:04:03.424 "nvme_io": false, 00:04:03.424 "nvme_io_md": false, 00:04:03.424 "write_zeroes": true, 00:04:03.424 "zcopy": true, 00:04:03.424 "get_zone_info": false, 00:04:03.424 "zone_management": false, 00:04:03.424 "zone_append": false, 00:04:03.424 "compare": false, 00:04:03.424 "compare_and_write": false, 00:04:03.424 "abort": true, 00:04:03.424 "seek_hole": false, 00:04:03.424 "seek_data": false, 00:04:03.424 "copy": true, 00:04:03.424 "nvme_iov_md": false 00:04:03.424 }, 00:04:03.424 "memory_domains": [ 00:04:03.424 { 00:04:03.424 "dma_device_id": "system", 00:04:03.424 "dma_device_type": 1 00:04:03.425 }, 00:04:03.425 { 00:04:03.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.425 "dma_device_type": 2 00:04:03.425 } 00:04:03.425 ], 00:04:03.425 "driver_specific": {} 00:04:03.425 }, 00:04:03.425 { 00:04:03.425 "name": "Passthru0", 00:04:03.425 "aliases": [ 00:04:03.425 "d5a5a53e-1550-5b82-a72e-36570acc8b03" 00:04:03.425 ], 00:04:03.425 "product_name": "passthru", 00:04:03.425 "block_size": 512, 00:04:03.425 "num_blocks": 16384, 00:04:03.425 "uuid": "d5a5a53e-1550-5b82-a72e-36570acc8b03", 00:04:03.425 "assigned_rate_limits": { 00:04:03.425 "rw_ios_per_sec": 0, 00:04:03.425 "rw_mbytes_per_sec": 0, 00:04:03.425 "r_mbytes_per_sec": 0, 00:04:03.425 "w_mbytes_per_sec": 0 00:04:03.425 }, 00:04:03.425 "claimed": false, 00:04:03.425 "zoned": false, 00:04:03.425 "supported_io_types": { 00:04:03.425 "read": true, 00:04:03.425 "write": true, 00:04:03.425 "unmap": true, 00:04:03.425 "flush": true, 00:04:03.425 "reset": true, 00:04:03.425 "nvme_admin": false, 00:04:03.425 "nvme_io": false, 00:04:03.425 "nvme_io_md": false, 00:04:03.425 "write_zeroes": true, 00:04:03.425 "zcopy": true, 00:04:03.425 "get_zone_info": false, 00:04:03.425 "zone_management": false, 00:04:03.425 "zone_append": false, 00:04:03.425 "compare": false, 00:04:03.425 "compare_and_write": false, 00:04:03.425 "abort": true, 00:04:03.425 "seek_hole": false, 00:04:03.425 "seek_data": false, 00:04:03.425 "copy": true, 00:04:03.425 "nvme_iov_md": false 00:04:03.425 }, 00:04:03.425 "memory_domains": [ 00:04:03.425 { 00:04:03.425 "dma_device_id": "system", 00:04:03.425 "dma_device_type": 1 00:04:03.425 }, 00:04:03.425 { 00:04:03.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.425 "dma_device_type": 2 00:04:03.425 } 00:04:03.425 ], 00:04:03.425 "driver_specific": { 00:04:03.425 "passthru": { 00:04:03.425 "name": "Passthru0", 00:04:03.425 "base_bdev_name": "Malloc0" 00:04:03.425 } 00:04:03.425 } 00:04:03.425 } 00:04:03.425 ]' 00:04:03.425 12:39:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:03.425 12:39:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:03.425 12:39:11 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:03.425 12:39:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.425 12:39:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.425 12:39:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.425 12:39:11 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:03.425 12:39:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.425 12:39:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.425 12:39:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.425 12:39:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:03.425 12:39:11 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.425 12:39:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.425 12:39:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.425 12:39:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:03.425 12:39:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:03.425 12:39:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:03.425 00:04:03.425 real 0m0.314s 00:04:03.425 user 0m0.205s 00:04:03.425 sys 0m0.040s 00:04:03.425 12:39:11 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.425 ************************************ 00:04:03.425 END TEST rpc_integrity 00:04:03.425 12:39:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.425 ************************************ 00:04:03.425 12:39:12 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:03.425 12:39:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.425 12:39:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.425 12:39:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.425 ************************************ 00:04:03.425 START TEST rpc_plugins 00:04:03.425 ************************************ 00:04:03.425 12:39:12 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:03.425 12:39:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:03.425 12:39:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.425 12:39:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.425 12:39:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.425 12:39:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:03.425 12:39:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:03.425 12:39:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.425 12:39:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.425 12:39:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.425 12:39:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:03.425 { 00:04:03.425 "name": "Malloc1", 00:04:03.425 "aliases": [ 00:04:03.425 "ba9341d3-a9af-4290-ae54-9dbb911e5ed1" 00:04:03.425 ], 00:04:03.425 "product_name": "Malloc disk", 00:04:03.425 "block_size": 4096, 00:04:03.425 "num_blocks": 256, 00:04:03.425 "uuid": "ba9341d3-a9af-4290-ae54-9dbb911e5ed1", 00:04:03.425 "assigned_rate_limits": { 00:04:03.425 "rw_ios_per_sec": 0, 00:04:03.425 "rw_mbytes_per_sec": 0, 00:04:03.425 "r_mbytes_per_sec": 0, 00:04:03.425 "w_mbytes_per_sec": 0 00:04:03.425 }, 00:04:03.425 "claimed": false, 00:04:03.425 "zoned": false, 00:04:03.425 "supported_io_types": { 00:04:03.425 "read": true, 00:04:03.425 "write": true, 00:04:03.425 "unmap": true, 00:04:03.425 "flush": true, 00:04:03.425 "reset": true, 00:04:03.425 "nvme_admin": false, 00:04:03.425 "nvme_io": false, 00:04:03.425 "nvme_io_md": false, 00:04:03.425 "write_zeroes": true, 00:04:03.425 "zcopy": true, 00:04:03.425 "get_zone_info": false, 00:04:03.425 "zone_management": false, 00:04:03.425 "zone_append": false, 00:04:03.425 "compare": false, 00:04:03.425 "compare_and_write": false, 00:04:03.425 "abort": true, 00:04:03.425 "seek_hole": false, 00:04:03.425 "seek_data": false, 00:04:03.425 "copy": true, 00:04:03.425 "nvme_iov_md": false 00:04:03.425 }, 00:04:03.425 "memory_domains": [ 00:04:03.425 { 
00:04:03.425 "dma_device_id": "system", 00:04:03.425 "dma_device_type": 1 00:04:03.425 }, 00:04:03.425 { 00:04:03.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.425 "dma_device_type": 2 00:04:03.425 } 00:04:03.425 ], 00:04:03.425 "driver_specific": {} 00:04:03.425 } 00:04:03.425 ]' 00:04:03.425 12:39:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:03.684 12:39:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:03.684 12:39:12 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:03.684 12:39:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.684 12:39:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.684 12:39:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.684 12:39:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:03.684 12:39:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.684 12:39:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.684 12:39:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.684 12:39:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:03.684 12:39:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:03.684 12:39:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:03.684 00:04:03.684 real 0m0.160s 00:04:03.684 user 0m0.104s 00:04:03.684 sys 0m0.020s 00:04:03.684 12:39:12 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.684 ************************************ 00:04:03.684 END TEST rpc_plugins 00:04:03.684 ************************************ 00:04:03.684 12:39:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.684 12:39:12 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:03.684 12:39:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.684 12:39:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.684 12:39:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.684 ************************************ 00:04:03.684 START TEST rpc_trace_cmd_test 00:04:03.684 ************************************ 00:04:03.684 12:39:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:03.684 12:39:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:03.684 12:39:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:03.684 12:39:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.684 12:39:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:03.684 12:39:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.684 12:39:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:03.684 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56762", 00:04:03.684 "tpoint_group_mask": "0x8", 00:04:03.684 "iscsi_conn": { 00:04:03.684 "mask": "0x2", 00:04:03.684 "tpoint_mask": "0x0" 00:04:03.684 }, 00:04:03.684 "scsi": { 00:04:03.684 "mask": "0x4", 00:04:03.684 "tpoint_mask": "0x0" 00:04:03.684 }, 00:04:03.684 "bdev": { 00:04:03.684 "mask": "0x8", 00:04:03.684 "tpoint_mask": "0xffffffffffffffff" 00:04:03.684 }, 00:04:03.684 "nvmf_rdma": { 00:04:03.684 "mask": "0x10", 00:04:03.684 "tpoint_mask": "0x0" 00:04:03.684 }, 00:04:03.684 "nvmf_tcp": { 00:04:03.684 "mask": "0x20", 00:04:03.684 "tpoint_mask": "0x0" 00:04:03.684 }, 00:04:03.684 "ftl": { 00:04:03.684 
"mask": "0x40", 00:04:03.684 "tpoint_mask": "0x0" 00:04:03.684 }, 00:04:03.684 "blobfs": { 00:04:03.684 "mask": "0x80", 00:04:03.684 "tpoint_mask": "0x0" 00:04:03.684 }, 00:04:03.684 "dsa": { 00:04:03.684 "mask": "0x200", 00:04:03.684 "tpoint_mask": "0x0" 00:04:03.684 }, 00:04:03.684 "thread": { 00:04:03.684 "mask": "0x400", 00:04:03.684 "tpoint_mask": "0x0" 00:04:03.684 }, 00:04:03.684 "nvme_pcie": { 00:04:03.684 "mask": "0x800", 00:04:03.684 "tpoint_mask": "0x0" 00:04:03.684 }, 00:04:03.684 "iaa": { 00:04:03.684 "mask": "0x1000", 00:04:03.684 "tpoint_mask": "0x0" 00:04:03.684 }, 00:04:03.684 "nvme_tcp": { 00:04:03.684 "mask": "0x2000", 00:04:03.684 "tpoint_mask": "0x0" 00:04:03.684 }, 00:04:03.684 "bdev_nvme": { 00:04:03.684 "mask": "0x4000", 00:04:03.684 "tpoint_mask": "0x0" 00:04:03.684 }, 00:04:03.684 "sock": { 00:04:03.684 "mask": "0x8000", 00:04:03.684 "tpoint_mask": "0x0" 00:04:03.684 }, 00:04:03.684 "blob": { 00:04:03.684 "mask": "0x10000", 00:04:03.684 "tpoint_mask": "0x0" 00:04:03.684 }, 00:04:03.684 "bdev_raid": { 00:04:03.684 "mask": "0x20000", 00:04:03.684 "tpoint_mask": "0x0" 00:04:03.684 }, 00:04:03.684 "scheduler": { 00:04:03.684 "mask": "0x40000", 00:04:03.684 "tpoint_mask": "0x0" 00:04:03.684 } 00:04:03.684 }' 00:04:03.684 12:39:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:03.684 12:39:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:03.684 12:39:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:03.943 12:39:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:03.943 12:39:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:03.943 12:39:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:03.943 12:39:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:03.943 12:39:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:03.943 12:39:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:03.943 12:39:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:03.943 00:04:03.943 real 0m0.269s 00:04:03.943 user 0m0.236s 00:04:03.943 sys 0m0.025s 00:04:03.943 12:39:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.943 ************************************ 00:04:03.943 END TEST rpc_trace_cmd_test 00:04:03.943 ************************************ 00:04:03.943 12:39:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:03.943 12:39:12 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:03.943 12:39:12 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:03.943 12:39:12 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:03.943 12:39:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.943 12:39:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.943 12:39:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.943 ************************************ 00:04:03.943 START TEST rpc_daemon_integrity 00:04:03.943 ************************************ 00:04:03.943 12:39:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:03.943 12:39:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:03.943 12:39:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.943 12:39:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.943 
12:39:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.943 12:39:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:03.943 12:39:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:04.201 12:39:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:04.201 12:39:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:04.201 12:39:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.201 12:39:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:04.202 { 00:04:04.202 "name": "Malloc2", 00:04:04.202 "aliases": [ 00:04:04.202 "155e6833-ab6f-475c-aa11-a8f71ae3321e" 00:04:04.202 ], 00:04:04.202 "product_name": "Malloc disk", 00:04:04.202 "block_size": 512, 00:04:04.202 "num_blocks": 16384, 00:04:04.202 "uuid": "155e6833-ab6f-475c-aa11-a8f71ae3321e", 00:04:04.202 "assigned_rate_limits": { 00:04:04.202 "rw_ios_per_sec": 0, 00:04:04.202 "rw_mbytes_per_sec": 0, 00:04:04.202 "r_mbytes_per_sec": 0, 00:04:04.202 "w_mbytes_per_sec": 0 00:04:04.202 }, 00:04:04.202 "claimed": false, 00:04:04.202 "zoned": false, 00:04:04.202 "supported_io_types": { 00:04:04.202 "read": true, 00:04:04.202 "write": true, 00:04:04.202 "unmap": true, 00:04:04.202 "flush": true, 00:04:04.202 "reset": true, 00:04:04.202 "nvme_admin": false, 00:04:04.202 "nvme_io": false, 00:04:04.202 "nvme_io_md": false, 00:04:04.202 "write_zeroes": true, 00:04:04.202 "zcopy": true, 00:04:04.202 "get_zone_info": false, 00:04:04.202 "zone_management": false, 00:04:04.202 "zone_append": false, 00:04:04.202 "compare": false, 00:04:04.202 "compare_and_write": false, 00:04:04.202 "abort": true, 00:04:04.202 "seek_hole": false, 00:04:04.202 "seek_data": false, 00:04:04.202 "copy": true, 00:04:04.202 "nvme_iov_md": false 00:04:04.202 }, 00:04:04.202 "memory_domains": [ 00:04:04.202 { 00:04:04.202 "dma_device_id": "system", 00:04:04.202 "dma_device_type": 1 00:04:04.202 }, 00:04:04.202 { 00:04:04.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:04.202 "dma_device_type": 2 00:04:04.202 } 00:04:04.202 ], 00:04:04.202 "driver_specific": {} 00:04:04.202 } 00:04:04.202 ]' 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.202 [2024-11-15 12:39:12.707047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:04.202 [2024-11-15 12:39:12.707088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:04:04.202 [2024-11-15 12:39:12.707103] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc3f980 00:04:04.202 [2024-11-15 12:39:12.707112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:04.202 [2024-11-15 12:39:12.708466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:04.202 [2024-11-15 12:39:12.708521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:04.202 Passthru0 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:04.202 { 00:04:04.202 "name": "Malloc2", 00:04:04.202 "aliases": [ 00:04:04.202 "155e6833-ab6f-475c-aa11-a8f71ae3321e" 00:04:04.202 ], 00:04:04.202 "product_name": "Malloc disk", 00:04:04.202 "block_size": 512, 00:04:04.202 "num_blocks": 16384, 00:04:04.202 "uuid": "155e6833-ab6f-475c-aa11-a8f71ae3321e", 00:04:04.202 "assigned_rate_limits": { 00:04:04.202 "rw_ios_per_sec": 0, 00:04:04.202 "rw_mbytes_per_sec": 0, 00:04:04.202 "r_mbytes_per_sec": 0, 00:04:04.202 "w_mbytes_per_sec": 0 00:04:04.202 }, 00:04:04.202 "claimed": true, 00:04:04.202 "claim_type": "exclusive_write", 00:04:04.202 "zoned": false, 00:04:04.202 "supported_io_types": { 00:04:04.202 "read": true, 00:04:04.202 "write": true, 00:04:04.202 "unmap": true, 00:04:04.202 "flush": true, 00:04:04.202 "reset": true, 00:04:04.202 "nvme_admin": false, 00:04:04.202 "nvme_io": false, 00:04:04.202 "nvme_io_md": false, 00:04:04.202 "write_zeroes": true, 00:04:04.202 "zcopy": true, 00:04:04.202 "get_zone_info": false, 00:04:04.202 "zone_management": false, 00:04:04.202 "zone_append": false, 00:04:04.202 "compare": false, 00:04:04.202 "compare_and_write": false, 00:04:04.202 "abort": true, 00:04:04.202 "seek_hole": false, 00:04:04.202 "seek_data": false, 00:04:04.202 "copy": true, 00:04:04.202 "nvme_iov_md": false 00:04:04.202 }, 00:04:04.202 "memory_domains": [ 00:04:04.202 { 00:04:04.202 "dma_device_id": "system", 00:04:04.202 "dma_device_type": 1 00:04:04.202 }, 00:04:04.202 { 00:04:04.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:04.202 "dma_device_type": 2 00:04:04.202 } 00:04:04.202 ], 00:04:04.202 "driver_specific": {} 00:04:04.202 }, 00:04:04.202 { 00:04:04.202 "name": "Passthru0", 00:04:04.202 "aliases": [ 00:04:04.202 "ad361496-89e3-5289-b85c-fe97ddf211f3" 00:04:04.202 ], 00:04:04.202 "product_name": "passthru", 00:04:04.202 "block_size": 512, 00:04:04.202 "num_blocks": 16384, 00:04:04.202 "uuid": "ad361496-89e3-5289-b85c-fe97ddf211f3", 00:04:04.202 "assigned_rate_limits": { 00:04:04.202 "rw_ios_per_sec": 0, 00:04:04.202 "rw_mbytes_per_sec": 0, 00:04:04.202 "r_mbytes_per_sec": 0, 00:04:04.202 "w_mbytes_per_sec": 0 00:04:04.202 }, 00:04:04.202 "claimed": false, 00:04:04.202 "zoned": false, 00:04:04.202 "supported_io_types": { 00:04:04.202 "read": true, 00:04:04.202 "write": true, 00:04:04.202 "unmap": true, 00:04:04.202 "flush": true, 00:04:04.202 "reset": true, 00:04:04.202 "nvme_admin": false, 00:04:04.202 "nvme_io": false, 00:04:04.202 "nvme_io_md": 
false, 00:04:04.202 "write_zeroes": true, 00:04:04.202 "zcopy": true, 00:04:04.202 "get_zone_info": false, 00:04:04.202 "zone_management": false, 00:04:04.202 "zone_append": false, 00:04:04.202 "compare": false, 00:04:04.202 "compare_and_write": false, 00:04:04.202 "abort": true, 00:04:04.202 "seek_hole": false, 00:04:04.202 "seek_data": false, 00:04:04.202 "copy": true, 00:04:04.202 "nvme_iov_md": false 00:04:04.202 }, 00:04:04.202 "memory_domains": [ 00:04:04.202 { 00:04:04.202 "dma_device_id": "system", 00:04:04.202 "dma_device_type": 1 00:04:04.202 }, 00:04:04.202 { 00:04:04.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:04.202 "dma_device_type": 2 00:04:04.202 } 00:04:04.202 ], 00:04:04.202 "driver_specific": { 00:04:04.202 "passthru": { 00:04:04.202 "name": "Passthru0", 00:04:04.202 "base_bdev_name": "Malloc2" 00:04:04.202 } 00:04:04.202 } 00:04:04.202 } 00:04:04.202 ]' 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:04.202 12:39:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:04.463 12:39:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:04.463 00:04:04.463 real 0m0.325s 00:04:04.463 user 0m0.206s 00:04:04.463 sys 0m0.047s 00:04:04.463 12:39:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.463 ************************************ 00:04:04.463 12:39:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.463 END TEST rpc_daemon_integrity 00:04:04.463 ************************************ 00:04:04.463 12:39:12 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:04.463 12:39:12 rpc -- rpc/rpc.sh@84 -- # killprocess 56762 00:04:04.463 12:39:12 rpc -- common/autotest_common.sh@954 -- # '[' -z 56762 ']' 00:04:04.463 12:39:12 rpc -- common/autotest_common.sh@958 -- # kill -0 56762 00:04:04.463 12:39:12 rpc -- common/autotest_common.sh@959 -- # uname 00:04:04.463 12:39:12 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:04.463 12:39:12 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56762 00:04:04.463 12:39:12 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:04.463 
12:39:12 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:04.463 killing process with pid 56762 00:04:04.463 12:39:12 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56762' 00:04:04.463 12:39:12 rpc -- common/autotest_common.sh@973 -- # kill 56762 00:04:04.463 12:39:12 rpc -- common/autotest_common.sh@978 -- # wait 56762 00:04:04.724 00:04:04.724 real 0m2.177s 00:04:04.724 user 0m2.903s 00:04:04.724 sys 0m0.574s 00:04:04.724 12:39:13 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.724 ************************************ 00:04:04.724 END TEST rpc 00:04:04.724 ************************************ 00:04:04.724 12:39:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.724 12:39:13 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:04.724 12:39:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.724 12:39:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.724 12:39:13 -- common/autotest_common.sh@10 -- # set +x 00:04:04.724 ************************************ 00:04:04.724 START TEST skip_rpc 00:04:04.724 ************************************ 00:04:04.724 12:39:13 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:04.724 * Looking for test storage... 00:04:04.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:04.724 12:39:13 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:04.724 12:39:13 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:04.724 12:39:13 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:04.982 12:39:13 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:04.982 12:39:13 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:04.982 12:39:13 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:04.982 12:39:13 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:04.982 12:39:13 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:04.982 12:39:13 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:04.982 12:39:13 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:04.982 12:39:13 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:04.982 12:39:13 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:04.982 12:39:13 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:04.982 12:39:13 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:04.982 12:39:13 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:04.982 12:39:13 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:04.982 12:39:13 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:04.982 12:39:13 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:04.982 12:39:13 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:04.982 12:39:13 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:04.982 12:39:13 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:04.982 12:39:13 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:04.982 12:39:13 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:04.982 12:39:13 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:04.982 12:39:13 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:04.982 12:39:13 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:04.982 12:39:13 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:04.982 12:39:13 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:04.982 12:39:13 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:04.982 12:39:13 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:04.982 12:39:13 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:04.982 12:39:13 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:04.982 12:39:13 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:04.982 12:39:13 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:04.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.982 --rc genhtml_branch_coverage=1 00:04:04.982 --rc genhtml_function_coverage=1 00:04:04.982 --rc genhtml_legend=1 00:04:04.982 --rc geninfo_all_blocks=1 00:04:04.982 --rc geninfo_unexecuted_blocks=1 00:04:04.982 00:04:04.982 ' 00:04:04.982 12:39:13 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:04.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.982 --rc genhtml_branch_coverage=1 00:04:04.982 --rc genhtml_function_coverage=1 00:04:04.982 --rc genhtml_legend=1 00:04:04.982 --rc geninfo_all_blocks=1 00:04:04.982 --rc geninfo_unexecuted_blocks=1 00:04:04.982 00:04:04.982 ' 00:04:04.982 12:39:13 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:04.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.982 --rc genhtml_branch_coverage=1 00:04:04.982 --rc genhtml_function_coverage=1 00:04:04.982 --rc genhtml_legend=1 00:04:04.982 --rc geninfo_all_blocks=1 00:04:04.982 --rc geninfo_unexecuted_blocks=1 00:04:04.982 00:04:04.982 ' 00:04:04.982 12:39:13 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:04.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.982 --rc genhtml_branch_coverage=1 00:04:04.982 --rc genhtml_function_coverage=1 00:04:04.982 --rc genhtml_legend=1 00:04:04.982 --rc geninfo_all_blocks=1 00:04:04.982 --rc geninfo_unexecuted_blocks=1 00:04:04.982 00:04:04.982 ' 00:04:04.982 12:39:13 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:04.982 12:39:13 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:04.982 12:39:13 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:04.982 12:39:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.982 12:39:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.982 12:39:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.982 ************************************ 00:04:04.982 START TEST skip_rpc 00:04:04.982 ************************************ 00:04:04.982 12:39:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:04.982 12:39:13 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56961 00:04:04.982 12:39:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:04.982 12:39:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:04.982 12:39:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:04.983 [2024-11-15 12:39:13.498243] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:04:04.983 [2024-11-15 12:39:13.498348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56961 ] 00:04:04.983 [2024-11-15 12:39:13.647274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.241 [2024-11-15 12:39:13.677991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.241 [2024-11-15 12:39:13.716642] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:10.509 12:39:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:10.509 12:39:18 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:10.509 12:39:18 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:10.509 12:39:18 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:10.509 12:39:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:10.509 12:39:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:10.509 12:39:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:10.509 12:39:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:10.509 12:39:18 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.509 12:39:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.509 12:39:18 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:10.509 12:39:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:10.509 12:39:18 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:10.509 12:39:18 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:10.509 12:39:18 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:10.509 12:39:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:10.509 12:39:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56961 00:04:10.509 12:39:18 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56961 ']' 00:04:10.509 12:39:18 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56961 00:04:10.509 12:39:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:10.509 12:39:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:10.509 12:39:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56961 00:04:10.509 12:39:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:10.509 12:39:18 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:10.509 killing process with pid 56961 00:04:10.509 12:39:18 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 56961' 00:04:10.509 12:39:18 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56961 00:04:10.509 12:39:18 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56961 00:04:10.509 00:04:10.509 real 0m5.277s 00:04:10.509 user 0m5.013s 00:04:10.509 sys 0m0.180s 00:04:10.509 12:39:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.509 12:39:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.509 ************************************ 00:04:10.509 END TEST skip_rpc 00:04:10.509 ************************************ 00:04:10.509 12:39:18 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:10.509 12:39:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.509 12:39:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.509 12:39:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.509 ************************************ 00:04:10.509 START TEST skip_rpc_with_json 00:04:10.509 ************************************ 00:04:10.509 12:39:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:10.509 12:39:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:10.509 12:39:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57042 00:04:10.509 12:39:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:10.509 12:39:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.509 12:39:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57042 00:04:10.509 12:39:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57042 ']' 00:04:10.509 12:39:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:10.509 12:39:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:10.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:10.509 12:39:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:10.509 12:39:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:10.509 12:39:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:10.509 [2024-11-15 12:39:18.831832] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
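The skip_rpc_with_json run recorded below first confirms the TCP transport is absent, creates it, and then dumps the running configuration to /home/vagrant/spdk_repo/spdk/test/rpc/config.json so a second target can be restarted with --json. A hedged sketch of that same sequence driven directly through scripts/rpc.py (an assumption: the test itself goes through the rpc_cmd wrapper shown in the log, which forwards the same arguments):

    # Query before creation: expected to fail with "transport 'tcp' does not exist"
    scripts/rpc.py nvmf_get_transports --trtype tcp
    # Create the TCP transport, then capture the running config for reuse with --json
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json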
00:04:10.509 [2024-11-15 12:39:18.831929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57042 ] 00:04:10.509 [2024-11-15 12:39:18.978586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.509 [2024-11-15 12:39:19.012967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.509 [2024-11-15 12:39:19.053538] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:10.509 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:10.509 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:10.509 12:39:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:10.509 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.509 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:10.509 [2024-11-15 12:39:19.173341] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:10.769 request: 00:04:10.769 { 00:04:10.769 "trtype": "tcp", 00:04:10.769 "method": "nvmf_get_transports", 00:04:10.769 "req_id": 1 00:04:10.769 } 00:04:10.769 Got JSON-RPC error response 00:04:10.769 response: 00:04:10.769 { 00:04:10.769 "code": -19, 00:04:10.769 "message": "No such device" 00:04:10.769 } 00:04:10.769 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:10.769 12:39:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:10.769 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.769 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:10.769 [2024-11-15 12:39:19.185416] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:10.769 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.769 12:39:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:10.769 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.769 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:10.769 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.769 12:39:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:10.769 { 00:04:10.769 "subsystems": [ 00:04:10.769 { 00:04:10.769 "subsystem": "fsdev", 00:04:10.769 "config": [ 00:04:10.769 { 00:04:10.769 "method": "fsdev_set_opts", 00:04:10.769 "params": { 00:04:10.769 "fsdev_io_pool_size": 65535, 00:04:10.769 "fsdev_io_cache_size": 256 00:04:10.769 } 00:04:10.769 } 00:04:10.769 ] 00:04:10.769 }, 00:04:10.769 { 00:04:10.769 "subsystem": "keyring", 00:04:10.769 "config": [] 00:04:10.769 }, 00:04:10.769 { 00:04:10.769 "subsystem": "iobuf", 00:04:10.769 "config": [ 00:04:10.769 { 00:04:10.769 "method": "iobuf_set_options", 00:04:10.769 "params": { 00:04:10.769 "small_pool_count": 8192, 00:04:10.769 "large_pool_count": 1024, 00:04:10.769 "small_bufsize": 8192, 00:04:10.769 "large_bufsize": 135168, 00:04:10.769 "enable_numa": false 00:04:10.769 } 
00:04:10.769 } 00:04:10.769 ] 00:04:10.769 }, 00:04:10.769 { 00:04:10.769 "subsystem": "sock", 00:04:10.769 "config": [ 00:04:10.769 { 00:04:10.770 "method": "sock_set_default_impl", 00:04:10.770 "params": { 00:04:10.770 "impl_name": "uring" 00:04:10.770 } 00:04:10.770 }, 00:04:10.770 { 00:04:10.770 "method": "sock_impl_set_options", 00:04:10.770 "params": { 00:04:10.770 "impl_name": "ssl", 00:04:10.770 "recv_buf_size": 4096, 00:04:10.770 "send_buf_size": 4096, 00:04:10.770 "enable_recv_pipe": true, 00:04:10.770 "enable_quickack": false, 00:04:10.770 "enable_placement_id": 0, 00:04:10.770 "enable_zerocopy_send_server": true, 00:04:10.770 "enable_zerocopy_send_client": false, 00:04:10.770 "zerocopy_threshold": 0, 00:04:10.770 "tls_version": 0, 00:04:10.770 "enable_ktls": false 00:04:10.770 } 00:04:10.770 }, 00:04:10.770 { 00:04:10.770 "method": "sock_impl_set_options", 00:04:10.770 "params": { 00:04:10.770 "impl_name": "posix", 00:04:10.770 "recv_buf_size": 2097152, 00:04:10.770 "send_buf_size": 2097152, 00:04:10.770 "enable_recv_pipe": true, 00:04:10.770 "enable_quickack": false, 00:04:10.770 "enable_placement_id": 0, 00:04:10.770 "enable_zerocopy_send_server": true, 00:04:10.770 "enable_zerocopy_send_client": false, 00:04:10.770 "zerocopy_threshold": 0, 00:04:10.770 "tls_version": 0, 00:04:10.770 "enable_ktls": false 00:04:10.770 } 00:04:10.770 }, 00:04:10.770 { 00:04:10.770 "method": "sock_impl_set_options", 00:04:10.770 "params": { 00:04:10.770 "impl_name": "uring", 00:04:10.770 "recv_buf_size": 2097152, 00:04:10.770 "send_buf_size": 2097152, 00:04:10.770 "enable_recv_pipe": true, 00:04:10.770 "enable_quickack": false, 00:04:10.770 "enable_placement_id": 0, 00:04:10.770 "enable_zerocopy_send_server": false, 00:04:10.770 "enable_zerocopy_send_client": false, 00:04:10.770 "zerocopy_threshold": 0, 00:04:10.770 "tls_version": 0, 00:04:10.770 "enable_ktls": false 00:04:10.770 } 00:04:10.770 } 00:04:10.770 ] 00:04:10.770 }, 00:04:10.770 { 00:04:10.770 "subsystem": "vmd", 00:04:10.770 "config": [] 00:04:10.770 }, 00:04:10.770 { 00:04:10.770 "subsystem": "accel", 00:04:10.770 "config": [ 00:04:10.770 { 00:04:10.770 "method": "accel_set_options", 00:04:10.770 "params": { 00:04:10.770 "small_cache_size": 128, 00:04:10.770 "large_cache_size": 16, 00:04:10.770 "task_count": 2048, 00:04:10.770 "sequence_count": 2048, 00:04:10.770 "buf_count": 2048 00:04:10.770 } 00:04:10.770 } 00:04:10.770 ] 00:04:10.770 }, 00:04:10.770 { 00:04:10.770 "subsystem": "bdev", 00:04:10.770 "config": [ 00:04:10.770 { 00:04:10.770 "method": "bdev_set_options", 00:04:10.770 "params": { 00:04:10.770 "bdev_io_pool_size": 65535, 00:04:10.770 "bdev_io_cache_size": 256, 00:04:10.770 "bdev_auto_examine": true, 00:04:10.770 "iobuf_small_cache_size": 128, 00:04:10.770 "iobuf_large_cache_size": 16 00:04:10.770 } 00:04:10.770 }, 00:04:10.770 { 00:04:10.770 "method": "bdev_raid_set_options", 00:04:10.770 "params": { 00:04:10.770 "process_window_size_kb": 1024, 00:04:10.770 "process_max_bandwidth_mb_sec": 0 00:04:10.770 } 00:04:10.770 }, 00:04:10.770 { 00:04:10.770 "method": "bdev_iscsi_set_options", 00:04:10.770 "params": { 00:04:10.770 "timeout_sec": 30 00:04:10.770 } 00:04:10.770 }, 00:04:10.770 { 00:04:10.770 "method": "bdev_nvme_set_options", 00:04:10.770 "params": { 00:04:10.770 "action_on_timeout": "none", 00:04:10.770 "timeout_us": 0, 00:04:10.770 "timeout_admin_us": 0, 00:04:10.770 "keep_alive_timeout_ms": 10000, 00:04:10.770 "arbitration_burst": 0, 00:04:10.770 "low_priority_weight": 0, 00:04:10.770 "medium_priority_weight": 
0, 00:04:10.770 "high_priority_weight": 0, 00:04:10.770 "nvme_adminq_poll_period_us": 10000, 00:04:10.770 "nvme_ioq_poll_period_us": 0, 00:04:10.770 "io_queue_requests": 0, 00:04:10.770 "delay_cmd_submit": true, 00:04:10.770 "transport_retry_count": 4, 00:04:10.770 "bdev_retry_count": 3, 00:04:10.770 "transport_ack_timeout": 0, 00:04:10.770 "ctrlr_loss_timeout_sec": 0, 00:04:10.770 "reconnect_delay_sec": 0, 00:04:10.770 "fast_io_fail_timeout_sec": 0, 00:04:10.770 "disable_auto_failback": false, 00:04:10.770 "generate_uuids": false, 00:04:10.770 "transport_tos": 0, 00:04:10.770 "nvme_error_stat": false, 00:04:10.770 "rdma_srq_size": 0, 00:04:10.770 "io_path_stat": false, 00:04:10.770 "allow_accel_sequence": false, 00:04:10.770 "rdma_max_cq_size": 0, 00:04:10.770 "rdma_cm_event_timeout_ms": 0, 00:04:10.770 "dhchap_digests": [ 00:04:10.770 "sha256", 00:04:10.770 "sha384", 00:04:10.770 "sha512" 00:04:10.770 ], 00:04:10.770 "dhchap_dhgroups": [ 00:04:10.770 "null", 00:04:10.770 "ffdhe2048", 00:04:10.770 "ffdhe3072", 00:04:10.770 "ffdhe4096", 00:04:10.770 "ffdhe6144", 00:04:10.770 "ffdhe8192" 00:04:10.770 ] 00:04:10.770 } 00:04:10.770 }, 00:04:10.770 { 00:04:10.770 "method": "bdev_nvme_set_hotplug", 00:04:10.770 "params": { 00:04:10.770 "period_us": 100000, 00:04:10.770 "enable": false 00:04:10.770 } 00:04:10.770 }, 00:04:10.770 { 00:04:10.770 "method": "bdev_wait_for_examine" 00:04:10.770 } 00:04:10.770 ] 00:04:10.770 }, 00:04:10.770 { 00:04:10.770 "subsystem": "scsi", 00:04:10.770 "config": null 00:04:10.770 }, 00:04:10.770 { 00:04:10.770 "subsystem": "scheduler", 00:04:10.770 "config": [ 00:04:10.770 { 00:04:10.770 "method": "framework_set_scheduler", 00:04:10.770 "params": { 00:04:10.770 "name": "static" 00:04:10.770 } 00:04:10.770 } 00:04:10.770 ] 00:04:10.770 }, 00:04:10.770 { 00:04:10.770 "subsystem": "vhost_scsi", 00:04:10.770 "config": [] 00:04:10.770 }, 00:04:10.770 { 00:04:10.770 "subsystem": "vhost_blk", 00:04:10.770 "config": [] 00:04:10.770 }, 00:04:10.770 { 00:04:10.770 "subsystem": "ublk", 00:04:10.770 "config": [] 00:04:10.770 }, 00:04:10.770 { 00:04:10.770 "subsystem": "nbd", 00:04:10.770 "config": [] 00:04:10.770 }, 00:04:10.770 { 00:04:10.770 "subsystem": "nvmf", 00:04:10.770 "config": [ 00:04:10.770 { 00:04:10.770 "method": "nvmf_set_config", 00:04:10.770 "params": { 00:04:10.770 "discovery_filter": "match_any", 00:04:10.770 "admin_cmd_passthru": { 00:04:10.770 "identify_ctrlr": false 00:04:10.770 }, 00:04:10.770 "dhchap_digests": [ 00:04:10.770 "sha256", 00:04:10.770 "sha384", 00:04:10.770 "sha512" 00:04:10.770 ], 00:04:10.770 "dhchap_dhgroups": [ 00:04:10.770 "null", 00:04:10.770 "ffdhe2048", 00:04:10.770 "ffdhe3072", 00:04:10.770 "ffdhe4096", 00:04:10.770 "ffdhe6144", 00:04:10.770 "ffdhe8192" 00:04:10.770 ] 00:04:10.770 } 00:04:10.770 }, 00:04:10.770 { 00:04:10.770 "method": "nvmf_set_max_subsystems", 00:04:10.770 "params": { 00:04:10.770 "max_subsystems": 1024 00:04:10.770 } 00:04:10.770 }, 00:04:10.770 { 00:04:10.770 "method": "nvmf_set_crdt", 00:04:10.770 "params": { 00:04:10.770 "crdt1": 0, 00:04:10.770 "crdt2": 0, 00:04:10.770 "crdt3": 0 00:04:10.770 } 00:04:10.770 }, 00:04:10.770 { 00:04:10.770 "method": "nvmf_create_transport", 00:04:10.770 "params": { 00:04:10.770 "trtype": "TCP", 00:04:10.770 "max_queue_depth": 128, 00:04:10.770 "max_io_qpairs_per_ctrlr": 127, 00:04:10.770 "in_capsule_data_size": 4096, 00:04:10.770 "max_io_size": 131072, 00:04:10.770 "io_unit_size": 131072, 00:04:10.770 "max_aq_depth": 128, 00:04:10.770 "num_shared_buffers": 511, 00:04:10.770 
"buf_cache_size": 4294967295, 00:04:10.770 "dif_insert_or_strip": false, 00:04:10.770 "zcopy": false, 00:04:10.770 "c2h_success": true, 00:04:10.770 "sock_priority": 0, 00:04:10.770 "abort_timeout_sec": 1, 00:04:10.770 "ack_timeout": 0, 00:04:10.770 "data_wr_pool_size": 0 00:04:10.770 } 00:04:10.770 } 00:04:10.770 ] 00:04:10.770 }, 00:04:10.770 { 00:04:10.770 "subsystem": "iscsi", 00:04:10.770 "config": [ 00:04:10.770 { 00:04:10.770 "method": "iscsi_set_options", 00:04:10.770 "params": { 00:04:10.770 "node_base": "iqn.2016-06.io.spdk", 00:04:10.770 "max_sessions": 128, 00:04:10.770 "max_connections_per_session": 2, 00:04:10.770 "max_queue_depth": 64, 00:04:10.770 "default_time2wait": 2, 00:04:10.770 "default_time2retain": 20, 00:04:10.770 "first_burst_length": 8192, 00:04:10.770 "immediate_data": true, 00:04:10.770 "allow_duplicated_isid": false, 00:04:10.770 "error_recovery_level": 0, 00:04:10.770 "nop_timeout": 60, 00:04:10.770 "nop_in_interval": 30, 00:04:10.770 "disable_chap": false, 00:04:10.770 "require_chap": false, 00:04:10.770 "mutual_chap": false, 00:04:10.770 "chap_group": 0, 00:04:10.770 "max_large_datain_per_connection": 64, 00:04:10.770 "max_r2t_per_connection": 4, 00:04:10.770 "pdu_pool_size": 36864, 00:04:10.770 "immediate_data_pool_size": 16384, 00:04:10.770 "data_out_pool_size": 2048 00:04:10.770 } 00:04:10.770 } 00:04:10.770 ] 00:04:10.770 } 00:04:10.770 ] 00:04:10.770 } 00:04:10.770 12:39:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:10.770 12:39:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57042 00:04:10.770 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57042 ']' 00:04:10.770 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57042 00:04:10.770 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:10.771 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:10.771 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57042 00:04:10.771 killing process with pid 57042 00:04:10.771 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:10.771 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:10.771 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57042' 00:04:10.771 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57042 00:04:10.771 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57042 00:04:11.054 12:39:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57062 00:04:11.054 12:39:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:11.054 12:39:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:16.344 12:39:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57062 00:04:16.344 12:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57062 ']' 00:04:16.344 12:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57062 00:04:16.344 12:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:16.344 12:39:24 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:16.344 12:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57062 00:04:16.344 killing process with pid 57062 00:04:16.344 12:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:16.344 12:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:16.344 12:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57062' 00:04:16.344 12:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57062 00:04:16.344 12:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57062 00:04:16.344 12:39:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:16.344 12:39:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:16.344 ************************************ 00:04:16.344 END TEST skip_rpc_with_json 00:04:16.344 ************************************ 00:04:16.344 00:04:16.344 real 0m6.184s 00:04:16.344 user 0m5.926s 00:04:16.345 sys 0m0.448s 00:04:16.345 12:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.345 12:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.345 12:39:24 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:16.345 12:39:24 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.345 12:39:24 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.345 12:39:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.345 ************************************ 00:04:16.345 START TEST skip_rpc_with_delay 00:04:16.345 ************************************ 00:04:16.345 12:39:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:16.345 12:39:24 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:16.345 12:39:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:16.345 12:39:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:16.345 12:39:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:16.345 12:39:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:16.345 12:39:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:16.345 12:39:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:16.345 12:39:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:16.345 12:39:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:16.345 12:39:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:16.345 12:39:25 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:16.345 12:39:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:16.604 [2024-11-15 12:39:25.058219] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:16.604 12:39:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:16.604 12:39:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:16.604 12:39:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:16.604 12:39:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:16.604 00:04:16.604 real 0m0.074s 00:04:16.604 user 0m0.047s 00:04:16.604 sys 0m0.027s 00:04:16.604 ************************************ 00:04:16.604 END TEST skip_rpc_with_delay 00:04:16.604 ************************************ 00:04:16.604 12:39:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.604 12:39:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:16.604 12:39:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:16.604 12:39:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:16.604 12:39:25 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:16.604 12:39:25 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.604 12:39:25 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.604 12:39:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.604 ************************************ 00:04:16.604 START TEST exit_on_failed_rpc_init 00:04:16.604 ************************************ 00:04:16.604 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:16.604 12:39:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57166 00:04:16.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:16.604 12:39:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57166 00:04:16.604 12:39:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:16.604 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57166 ']' 00:04:16.604 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:16.604 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:16.604 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:16.604 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:16.604 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:16.604 [2024-11-15 12:39:25.190545] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:04:16.604 [2024-11-15 12:39:25.190657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57166 ] 00:04:16.863 [2024-11-15 12:39:25.332676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.863 [2024-11-15 12:39:25.365788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.863 [2024-11-15 12:39:25.406971] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:17.121 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:17.121 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:17.121 12:39:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:17.121 12:39:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:17.121 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:17.121 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:17.121 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:17.121 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:17.121 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:17.121 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:17.121 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:17.121 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:17.122 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:17.122 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:17.122 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:17.122 [2024-11-15 12:39:25.610506] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:04:17.122 [2024-11-15 12:39:25.610621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57176 ] 00:04:17.122 [2024-11-15 12:39:25.763901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.381 [2024-11-15 12:39:25.804160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:17.381 [2024-11-15 12:39:25.804261] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:17.381 [2024-11-15 12:39:25.804279] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:17.381 [2024-11-15 12:39:25.804289] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:17.381 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:17.381 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:17.381 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:17.381 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:17.381 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:17.381 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:17.381 12:39:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:17.381 12:39:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57166 00:04:17.381 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57166 ']' 00:04:17.381 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57166 00:04:17.381 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:17.381 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:17.381 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57166 00:04:17.381 killing process with pid 57166 00:04:17.381 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:17.381 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:17.381 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57166' 00:04:17.381 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57166 00:04:17.381 12:39:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57166 00:04:17.639 ************************************ 00:04:17.639 END TEST exit_on_failed_rpc_init 00:04:17.639 ************************************ 00:04:17.639 00:04:17.639 real 0m1.011s 00:04:17.639 user 0m1.186s 00:04:17.639 sys 0m0.276s 00:04:17.639 12:39:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.639 12:39:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:17.639 12:39:26 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:17.639 00:04:17.639 real 0m12.955s 00:04:17.639 user 0m12.352s 00:04:17.639 sys 0m1.146s 00:04:17.639 ************************************ 00:04:17.639 END TEST skip_rpc 00:04:17.639 ************************************ 00:04:17.639 12:39:26 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.639 12:39:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.639 12:39:26 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:17.639 12:39:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.639 12:39:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.639 12:39:26 -- common/autotest_common.sh@10 -- # set +x 00:04:17.639 
************************************ 00:04:17.639 START TEST rpc_client 00:04:17.639 ************************************ 00:04:17.639 12:39:26 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:17.898 * Looking for test storage... 00:04:17.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:17.898 12:39:26 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:17.898 12:39:26 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:17.898 12:39:26 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:17.898 12:39:26 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:17.898 12:39:26 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:17.898 12:39:26 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:17.898 12:39:26 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:17.898 12:39:26 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.898 12:39:26 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:17.898 12:39:26 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:17.898 12:39:26 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:17.898 12:39:26 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:17.898 12:39:26 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:17.898 12:39:26 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:17.898 12:39:26 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:17.898 12:39:26 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:17.898 12:39:26 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:17.898 12:39:26 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:17.898 12:39:26 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:17.898 12:39:26 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:17.898 12:39:26 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:17.898 12:39:26 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.898 12:39:26 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:17.898 12:39:26 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:17.898 12:39:26 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:17.898 12:39:26 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:17.898 12:39:26 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:17.898 12:39:26 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:17.898 12:39:26 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:17.898 12:39:26 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:17.898 12:39:26 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:17.898 12:39:26 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:17.898 12:39:26 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:17.898 12:39:26 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:17.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.898 --rc genhtml_branch_coverage=1 00:04:17.898 --rc genhtml_function_coverage=1 00:04:17.898 --rc genhtml_legend=1 00:04:17.898 --rc geninfo_all_blocks=1 00:04:17.898 --rc geninfo_unexecuted_blocks=1 00:04:17.898 00:04:17.898 ' 00:04:17.898 12:39:26 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:17.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.898 --rc genhtml_branch_coverage=1 00:04:17.898 --rc genhtml_function_coverage=1 00:04:17.898 --rc genhtml_legend=1 00:04:17.898 --rc geninfo_all_blocks=1 00:04:17.898 --rc geninfo_unexecuted_blocks=1 00:04:17.898 00:04:17.898 ' 00:04:17.898 12:39:26 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:17.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.898 --rc genhtml_branch_coverage=1 00:04:17.898 --rc genhtml_function_coverage=1 00:04:17.898 --rc genhtml_legend=1 00:04:17.898 --rc geninfo_all_blocks=1 00:04:17.898 --rc geninfo_unexecuted_blocks=1 00:04:17.898 00:04:17.898 ' 00:04:17.898 12:39:26 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:17.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.898 --rc genhtml_branch_coverage=1 00:04:17.898 --rc genhtml_function_coverage=1 00:04:17.898 --rc genhtml_legend=1 00:04:17.898 --rc geninfo_all_blocks=1 00:04:17.898 --rc geninfo_unexecuted_blocks=1 00:04:17.898 00:04:17.898 ' 00:04:17.898 12:39:26 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:17.898 OK 00:04:17.898 12:39:26 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:17.898 00:04:17.898 real 0m0.214s 00:04:17.898 user 0m0.136s 00:04:17.898 sys 0m0.085s 00:04:17.898 12:39:26 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.898 12:39:26 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:17.898 ************************************ 00:04:17.898 END TEST rpc_client 00:04:17.898 ************************************ 00:04:17.898 12:39:26 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:17.898 12:39:26 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.898 12:39:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.898 12:39:26 -- common/autotest_common.sh@10 -- # set +x 00:04:17.898 ************************************ 00:04:17.898 START TEST json_config 00:04:17.898 ************************************ 00:04:17.898 12:39:26 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:17.898 12:39:26 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:17.898 12:39:26 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:17.899 12:39:26 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:18.159 12:39:26 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:18.159 12:39:26 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:18.159 12:39:26 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:18.159 12:39:26 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:18.159 12:39:26 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:18.159 12:39:26 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:18.159 12:39:26 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:18.159 12:39:26 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:18.159 12:39:26 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:18.159 12:39:26 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:18.159 12:39:26 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:18.159 12:39:26 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:18.159 12:39:26 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:18.159 12:39:26 json_config -- scripts/common.sh@345 -- # : 1 00:04:18.159 12:39:26 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:18.159 12:39:26 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:18.159 12:39:26 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:18.159 12:39:26 json_config -- scripts/common.sh@353 -- # local d=1 00:04:18.159 12:39:26 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:18.159 12:39:26 json_config -- scripts/common.sh@355 -- # echo 1 00:04:18.159 12:39:26 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:18.159 12:39:26 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:18.159 12:39:26 json_config -- scripts/common.sh@353 -- # local d=2 00:04:18.159 12:39:26 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:18.159 12:39:26 json_config -- scripts/common.sh@355 -- # echo 2 00:04:18.159 12:39:26 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:18.159 12:39:26 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:18.159 12:39:26 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:18.159 12:39:26 json_config -- scripts/common.sh@368 -- # return 0 00:04:18.159 12:39:26 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:18.159 12:39:26 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:18.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.159 --rc genhtml_branch_coverage=1 00:04:18.159 --rc genhtml_function_coverage=1 00:04:18.159 --rc genhtml_legend=1 00:04:18.159 --rc geninfo_all_blocks=1 00:04:18.159 --rc geninfo_unexecuted_blocks=1 00:04:18.159 00:04:18.159 ' 00:04:18.159 12:39:26 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:18.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.159 --rc genhtml_branch_coverage=1 00:04:18.159 --rc genhtml_function_coverage=1 00:04:18.159 --rc genhtml_legend=1 00:04:18.159 --rc geninfo_all_blocks=1 00:04:18.159 --rc geninfo_unexecuted_blocks=1 00:04:18.159 00:04:18.159 ' 00:04:18.159 12:39:26 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:18.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.159 --rc genhtml_branch_coverage=1 00:04:18.159 --rc genhtml_function_coverage=1 00:04:18.159 --rc genhtml_legend=1 00:04:18.159 --rc geninfo_all_blocks=1 00:04:18.159 --rc geninfo_unexecuted_blocks=1 00:04:18.159 00:04:18.159 ' 00:04:18.159 12:39:26 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:18.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.159 --rc genhtml_branch_coverage=1 00:04:18.159 --rc genhtml_function_coverage=1 00:04:18.159 --rc genhtml_legend=1 00:04:18.159 --rc geninfo_all_blocks=1 00:04:18.159 --rc geninfo_unexecuted_blocks=1 00:04:18.159 00:04:18.159 ' 00:04:18.159 12:39:26 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:18.159 12:39:26 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:18.159 12:39:26 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:18.159 12:39:26 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:18.159 12:39:26 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:18.159 12:39:26 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:18.159 12:39:26 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:18.159 12:39:26 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:18.159 12:39:26 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:18.159 12:39:26 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:18.159 12:39:26 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:18.159 12:39:26 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:18.159 12:39:26 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:04:18.159 12:39:26 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:04:18.159 12:39:26 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:18.159 12:39:26 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:18.159 12:39:26 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:18.159 12:39:26 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:18.159 12:39:26 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:18.159 12:39:26 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:18.159 12:39:26 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:18.159 12:39:26 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:18.159 12:39:26 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:18.159 12:39:26 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.159 12:39:26 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.159 12:39:26 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.159 12:39:26 json_config -- paths/export.sh@5 -- # export PATH 00:04:18.159 12:39:26 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.159 12:39:26 json_config -- nvmf/common.sh@51 -- # : 0 00:04:18.159 12:39:26 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:18.159 12:39:26 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:18.159 12:39:26 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:18.159 12:39:26 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:18.159 12:39:26 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:18.159 12:39:26 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:18.159 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:18.159 12:39:26 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:18.159 12:39:26 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:18.159 12:39:26 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:18.160 12:39:26 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:18.160 12:39:26 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:18.160 12:39:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:18.160 12:39:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:18.160 12:39:26 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:18.160 12:39:26 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:18.160 12:39:26 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:18.160 12:39:26 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:18.160 12:39:26 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:18.160 12:39:26 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:18.160 12:39:26 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:18.160 12:39:26 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:18.160 12:39:26 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:18.160 INFO: JSON configuration test init 00:04:18.160 12:39:26 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:18.160 12:39:26 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:18.160 12:39:26 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:18.160 12:39:26 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:18.160 12:39:26 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:18.160 12:39:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:18.160 12:39:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.160 12:39:26 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:18.160 12:39:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:18.160 12:39:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.160 Waiting for target to run... 00:04:18.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
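For reference, a minimal sketch of the target launch the json_config harness performs next (not captured output; the spdk_tgt flags mirror the invocation recorded below, while the polling loop and the explicit framework_start_init call are illustrative assumptions about how a --wait-for-rpc target is waited on and then un-paused):

./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
# Poll the RPC Unix socket until the target answers; rpc_get_methods is a cheap query that
# works even before the framework has finished initializing.
until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
done
# With --wait-for-rpc the framework stays paused; framework_start_init (or a load_config call
# that supplies the runtime configuration, as this test does further below) completes startup.
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init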
00:04:18.160 12:39:26 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:18.160 12:39:26 json_config -- json_config/common.sh@9 -- # local app=target 00:04:18.160 12:39:26 json_config -- json_config/common.sh@10 -- # shift 00:04:18.160 12:39:26 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:18.160 12:39:26 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:18.160 12:39:26 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:18.160 12:39:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:18.160 12:39:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:18.160 12:39:26 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57310 00:04:18.160 12:39:26 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:18.160 12:39:26 json_config -- json_config/common.sh@25 -- # waitforlisten 57310 /var/tmp/spdk_tgt.sock 00:04:18.160 12:39:26 json_config -- common/autotest_common.sh@835 -- # '[' -z 57310 ']' 00:04:18.160 12:39:26 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:18.160 12:39:26 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:18.160 12:39:26 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:18.160 12:39:26 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:18.160 12:39:26 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:18.160 12:39:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.160 [2024-11-15 12:39:26.773347] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:04:18.160 [2024-11-15 12:39:26.773652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57310 ] 00:04:18.727 [2024-11-15 12:39:27.106195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.727 [2024-11-15 12:39:27.130529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.295 12:39:27 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:19.295 12:39:27 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:19.295 12:39:27 json_config -- json_config/common.sh@26 -- # echo '' 00:04:19.295 00:04:19.295 12:39:27 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:19.295 12:39:27 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:19.295 12:39:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:19.295 12:39:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.295 12:39:27 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:19.295 12:39:27 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:19.295 12:39:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:19.295 12:39:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.295 12:39:27 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:19.295 12:39:27 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:19.295 12:39:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:19.554 [2024-11-15 12:39:28.192319] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:19.813 12:39:28 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:19.813 12:39:28 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:19.813 12:39:28 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:19.813 12:39:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.813 12:39:28 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:19.813 12:39:28 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:19.813 12:39:28 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:19.813 12:39:28 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:19.813 12:39:28 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:19.813 12:39:28 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:19.813 12:39:28 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:19.813 12:39:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:20.072 12:39:28 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:20.072 12:39:28 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:20.072 12:39:28 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:04:20.072 12:39:28 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:20.072 12:39:28 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:20.072 12:39:28 json_config -- json_config/json_config.sh@54 -- # sort 00:04:20.072 12:39:28 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:20.072 12:39:28 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:20.072 12:39:28 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:20.072 12:39:28 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:20.072 12:39:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:20.072 12:39:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.072 12:39:28 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:20.072 12:39:28 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:20.072 12:39:28 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:20.072 12:39:28 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:20.072 12:39:28 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:20.072 12:39:28 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:20.072 12:39:28 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:20.072 12:39:28 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:20.072 12:39:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.072 12:39:28 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:20.072 12:39:28 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:20.072 12:39:28 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:20.072 12:39:28 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:20.072 12:39:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:20.639 MallocForNvmf0 00:04:20.639 12:39:29 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:20.639 12:39:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:20.639 MallocForNvmf1 00:04:20.639 12:39:29 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:20.639 12:39:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:20.898 [2024-11-15 12:39:29.485770] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:20.898 12:39:29 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:20.898 12:39:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:21.157 12:39:29 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:21.157 12:39:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:21.415 12:39:29 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:21.415 12:39:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:21.674 12:39:30 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:21.674 12:39:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:21.933 [2024-11-15 12:39:30.378296] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:21.933 12:39:30 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:21.933 12:39:30 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:21.933 12:39:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.933 12:39:30 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:21.933 12:39:30 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:21.933 12:39:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.933 12:39:30 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:21.933 12:39:30 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:21.933 12:39:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:22.192 MallocBdevForConfigChangeCheck 00:04:22.192 12:39:30 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:22.192 12:39:30 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:22.192 12:39:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.192 12:39:30 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:22.192 12:39:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:22.759 INFO: shutting down applications... 00:04:22.759 12:39:31 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
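The subsystem build-up driven above reduces to a short RPC sequence; the following condensed sketch (not captured output) uses only the tgt_rpc calls visible in the log, with $rpc as illustrative shorthand and the save_config redirect target abbreviated:

rpc='./scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
$rpc bdev_malloc_create 8 512 --name MallocForNvmf0            # backing bdevs for the namespaces
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
$rpc nvmf_create_transport -t tcp -u 8192 -c 0                 # TCP transport, flags as recorded above
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
$rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck   # marker bdev deleted later to force a config diff
$rpc save_config > spdk_tgt_config.json                        # snapshot reused for the relaunch below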
00:04:22.759 12:39:31 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:22.759 12:39:31 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:22.759 12:39:31 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:22.759 12:39:31 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:22.759 Calling clear_iscsi_subsystem 00:04:22.759 Calling clear_nvmf_subsystem 00:04:22.759 Calling clear_nbd_subsystem 00:04:22.759 Calling clear_ublk_subsystem 00:04:22.759 Calling clear_vhost_blk_subsystem 00:04:22.759 Calling clear_vhost_scsi_subsystem 00:04:22.759 Calling clear_bdev_subsystem 00:04:23.017 12:39:31 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:23.017 12:39:31 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:23.017 12:39:31 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:23.017 12:39:31 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:23.017 12:39:31 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:23.017 12:39:31 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:23.275 12:39:31 json_config -- json_config/json_config.sh@352 -- # break 00:04:23.275 12:39:31 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:23.275 12:39:31 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:23.275 12:39:31 json_config -- json_config/common.sh@31 -- # local app=target 00:04:23.275 12:39:31 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:23.275 12:39:31 json_config -- json_config/common.sh@35 -- # [[ -n 57310 ]] 00:04:23.275 12:39:31 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57310 00:04:23.275 12:39:31 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:23.275 12:39:31 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:23.275 12:39:31 json_config -- json_config/common.sh@41 -- # kill -0 57310 00:04:23.275 12:39:31 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:23.843 12:39:32 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:23.843 12:39:32 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:23.843 12:39:32 json_config -- json_config/common.sh@41 -- # kill -0 57310 00:04:23.843 12:39:32 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:23.843 12:39:32 json_config -- json_config/common.sh@43 -- # break 00:04:23.843 SPDK target shutdown done 00:04:23.843 INFO: relaunching applications... 00:04:23.843 12:39:32 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:23.843 12:39:32 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:23.844 12:39:32 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
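The shutdown just logged follows a simple signal-and-poll pattern; a sketch of it (not captured output, mirroring the kill -SIGINT / kill -0 / sleep 0.5 calls above, with $app_pid standing in for pid 57310):

kill -SIGINT "$app_pid"
for (( i = 0; i < 30; i++ )); do                  # allow up to ~15 s for a clean exit
        kill -0 "$app_pid" 2>/dev/null || break   # kill -0 only tests whether the pid is still alive
        sleep 0.5
done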
00:04:23.844 12:39:32 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:23.844 12:39:32 json_config -- json_config/common.sh@9 -- # local app=target 00:04:23.844 12:39:32 json_config -- json_config/common.sh@10 -- # shift 00:04:23.844 12:39:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:23.844 12:39:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:23.844 12:39:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:23.844 12:39:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:23.844 12:39:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:23.844 12:39:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57506 00:04:23.844 12:39:32 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:23.844 12:39:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:23.844 Waiting for target to run... 00:04:23.844 12:39:32 json_config -- json_config/common.sh@25 -- # waitforlisten 57506 /var/tmp/spdk_tgt.sock 00:04:23.844 12:39:32 json_config -- common/autotest_common.sh@835 -- # '[' -z 57506 ']' 00:04:23.844 12:39:32 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:23.844 12:39:32 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:23.844 12:39:32 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:23.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:23.844 12:39:32 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:23.844 12:39:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.844 [2024-11-15 12:39:32.440660] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:04:23.844 [2024-11-15 12:39:32.441000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57506 ] 00:04:24.102 [2024-11-15 12:39:32.736839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.102 [2024-11-15 12:39:32.757733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.361 [2024-11-15 12:39:32.888117] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:24.620 [2024-11-15 12:39:33.083770] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:24.620 [2024-11-15 12:39:33.115789] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:24.879 00:04:24.879 INFO: Checking if target configuration is the same... 
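The check announced here is a sort-then-diff of two JSON dumps; a condensed sketch of what json_diff.sh does below (not captured output; the temporary file names and redirections are illustrative assumptions, while the tools and flags match the trace that follows):

./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live_config.json
./test/json_config/config_filter.py -method sort < /tmp/live_config.json  > /tmp/sorted_live.json
./test/json_config/config_filter.py -method sort < spdk_tgt_config.json   > /tmp/sorted_saved.json
diff -u /tmp/sorted_live.json /tmp/sorted_saved.json \
        && echo 'INFO: JSON config files are the same'   # the later pass deletes MallocBdevForConfigChangeCheck, so its diff returns 1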
00:04:24.879 12:39:33 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:24.879 12:39:33 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:24.879 12:39:33 json_config -- json_config/common.sh@26 -- # echo '' 00:04:24.879 12:39:33 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:24.879 12:39:33 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:24.879 12:39:33 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:24.879 12:39:33 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:24.879 12:39:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:24.879 + '[' 2 -ne 2 ']' 00:04:24.879 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:24.879 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:24.879 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:24.879 +++ basename /dev/fd/62 00:04:24.879 ++ mktemp /tmp/62.XXX 00:04:24.879 + tmp_file_1=/tmp/62.WpD 00:04:24.879 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:24.879 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:24.879 + tmp_file_2=/tmp/spdk_tgt_config.json.UpB 00:04:24.879 + ret=0 00:04:24.879 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:25.137 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:25.395 + diff -u /tmp/62.WpD /tmp/spdk_tgt_config.json.UpB 00:04:25.395 INFO: JSON config files are the same 00:04:25.395 + echo 'INFO: JSON config files are the same' 00:04:25.395 + rm /tmp/62.WpD /tmp/spdk_tgt_config.json.UpB 00:04:25.395 + exit 0 00:04:25.395 INFO: changing configuration and checking if this can be detected... 00:04:25.395 12:39:33 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:25.395 12:39:33 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:25.395 12:39:33 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:25.395 12:39:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:25.653 12:39:34 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:25.653 12:39:34 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:25.653 12:39:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:25.653 + '[' 2 -ne 2 ']' 00:04:25.653 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:25.653 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
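[editor's note] json_diff.sh above compares the running target's configuration with the saved file: both sides are normalized with config_filter.py -method sort so key order cannot cause a false mismatch, and a zero diff means "JSON config files are the same". The change-detection step that follows deletes MallocBdevForConfigChangeCheck and expects the same comparison to fail. A hedged sketch of both steps, assuming the helper scripts behave as the trace shows:

  SPDK=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  FILTER="$SPDK/test/json_config/config_filter.py"
  live=$(mktemp) saved=$(mktemp)

  $RPC save_config | $FILTER -method sort > "$live"
  $FILTER -method sort < "$SPDK/spdk_tgt_config.json" > "$saved"
  diff -u "$live" "$saved" && echo 'INFO: JSON config files are the same'

  # Mutate the live config; the same diff should now return non-zero.
  $RPC bdev_malloc_delete MallocBdevForConfigChangeCheck
  $RPC save_config | $FILTER -method sort > "$live"
  diff -u "$live" "$saved" || echo 'INFO: configuration change detected.'
  rm -f "$live" "$saved"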
00:04:25.653 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:25.653 +++ basename /dev/fd/62 00:04:25.653 ++ mktemp /tmp/62.XXX 00:04:25.653 + tmp_file_1=/tmp/62.NEV 00:04:25.653 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:25.653 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:25.653 + tmp_file_2=/tmp/spdk_tgt_config.json.sXl 00:04:25.653 + ret=0 00:04:25.653 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:25.912 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:25.912 + diff -u /tmp/62.NEV /tmp/spdk_tgt_config.json.sXl 00:04:25.912 + ret=1 00:04:25.912 + echo '=== Start of file: /tmp/62.NEV ===' 00:04:25.912 + cat /tmp/62.NEV 00:04:25.912 + echo '=== End of file: /tmp/62.NEV ===' 00:04:25.912 + echo '' 00:04:25.912 + echo '=== Start of file: /tmp/spdk_tgt_config.json.sXl ===' 00:04:25.912 + cat /tmp/spdk_tgt_config.json.sXl 00:04:25.912 + echo '=== End of file: /tmp/spdk_tgt_config.json.sXl ===' 00:04:25.912 + echo '' 00:04:25.912 + rm /tmp/62.NEV /tmp/spdk_tgt_config.json.sXl 00:04:25.912 + exit 1 00:04:26.171 INFO: configuration change detected. 00:04:26.171 12:39:34 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:26.171 12:39:34 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:26.171 12:39:34 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:26.171 12:39:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:26.171 12:39:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.171 12:39:34 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:26.171 12:39:34 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:26.171 12:39:34 json_config -- json_config/json_config.sh@324 -- # [[ -n 57506 ]] 00:04:26.171 12:39:34 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:26.171 12:39:34 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:26.171 12:39:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:26.171 12:39:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.171 12:39:34 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:26.171 12:39:34 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:26.171 12:39:34 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:26.171 12:39:34 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:26.171 12:39:34 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:26.171 12:39:34 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:26.171 12:39:34 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:26.171 12:39:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.171 12:39:34 json_config -- json_config/json_config.sh@330 -- # killprocess 57506 00:04:26.171 12:39:34 json_config -- common/autotest_common.sh@954 -- # '[' -z 57506 ']' 00:04:26.171 12:39:34 json_config -- common/autotest_common.sh@958 -- # kill -0 57506 00:04:26.171 12:39:34 json_config -- common/autotest_common.sh@959 -- # uname 00:04:26.171 12:39:34 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:26.171 12:39:34 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57506 00:04:26.171 
killing process with pid 57506 00:04:26.171 12:39:34 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:26.171 12:39:34 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:26.171 12:39:34 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57506' 00:04:26.171 12:39:34 json_config -- common/autotest_common.sh@973 -- # kill 57506 00:04:26.171 12:39:34 json_config -- common/autotest_common.sh@978 -- # wait 57506 00:04:26.171 12:39:34 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:26.171 12:39:34 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:26.171 12:39:34 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:26.171 12:39:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.431 INFO: Success 00:04:26.431 12:39:34 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:26.431 12:39:34 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:26.431 00:04:26.431 real 0m8.364s 00:04:26.431 user 0m12.172s 00:04:26.431 sys 0m1.400s 00:04:26.431 ************************************ 00:04:26.431 END TEST json_config 00:04:26.431 ************************************ 00:04:26.431 12:39:34 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.431 12:39:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.431 12:39:34 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:26.431 12:39:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.431 12:39:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.431 12:39:34 -- common/autotest_common.sh@10 -- # set +x 00:04:26.431 ************************************ 00:04:26.431 START TEST json_config_extra_key 00:04:26.431 ************************************ 00:04:26.431 12:39:34 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:26.431 12:39:34 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:26.431 12:39:34 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:26.431 12:39:34 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:26.431 12:39:35 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:26.431 12:39:35 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.431 12:39:35 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.431 12:39:35 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.431 12:39:35 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.431 12:39:35 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.431 12:39:35 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.431 12:39:35 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.431 12:39:35 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.431 12:39:35 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.431 12:39:35 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.431 12:39:35 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.431 12:39:35 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:26.431 12:39:35 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:26.431 12:39:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.431 12:39:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:26.431 12:39:35 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:26.431 12:39:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:26.431 12:39:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.431 12:39:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:26.431 12:39:35 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.431 12:39:35 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:26.431 12:39:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:26.431 12:39:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.431 12:39:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:26.431 12:39:35 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.431 12:39:35 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.431 12:39:35 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.431 12:39:35 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:26.431 12:39:35 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.431 12:39:35 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:26.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.431 --rc genhtml_branch_coverage=1 00:04:26.431 --rc genhtml_function_coverage=1 00:04:26.431 --rc genhtml_legend=1 00:04:26.431 --rc geninfo_all_blocks=1 00:04:26.431 --rc geninfo_unexecuted_blocks=1 00:04:26.431 00:04:26.431 ' 00:04:26.431 12:39:35 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:26.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.431 --rc genhtml_branch_coverage=1 00:04:26.431 --rc genhtml_function_coverage=1 00:04:26.431 --rc genhtml_legend=1 00:04:26.431 --rc geninfo_all_blocks=1 00:04:26.431 --rc geninfo_unexecuted_blocks=1 00:04:26.431 00:04:26.431 ' 00:04:26.431 12:39:35 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:26.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.431 --rc genhtml_branch_coverage=1 00:04:26.431 --rc genhtml_function_coverage=1 00:04:26.431 --rc genhtml_legend=1 00:04:26.431 --rc geninfo_all_blocks=1 00:04:26.431 --rc geninfo_unexecuted_blocks=1 00:04:26.431 00:04:26.431 ' 00:04:26.431 12:39:35 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:26.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.431 --rc genhtml_branch_coverage=1 00:04:26.431 --rc genhtml_function_coverage=1 00:04:26.431 --rc genhtml_legend=1 00:04:26.431 --rc geninfo_all_blocks=1 00:04:26.431 --rc geninfo_unexecuted_blocks=1 00:04:26.431 00:04:26.431 ' 00:04:26.431 12:39:35 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:26.431 12:39:35 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:04:26.431 12:39:35 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:26.431 12:39:35 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:26.431 12:39:35 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:26.431 12:39:35 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:26.431 12:39:35 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:26.431 12:39:35 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:26.431 12:39:35 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:26.431 12:39:35 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:26.431 12:39:35 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:26.431 12:39:35 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:26.431 12:39:35 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:04:26.431 12:39:35 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:04:26.431 12:39:35 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:26.431 12:39:35 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:26.431 12:39:35 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:26.431 12:39:35 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:26.431 12:39:35 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:26.431 12:39:35 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:26.691 12:39:35 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:26.691 12:39:35 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:26.691 12:39:35 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:26.691 12:39:35 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.691 12:39:35 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.691 12:39:35 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.691 12:39:35 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:26.691 12:39:35 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.691 12:39:35 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:26.691 12:39:35 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:26.691 12:39:35 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:26.691 12:39:35 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:26.691 12:39:35 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:26.691 12:39:35 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:26.691 12:39:35 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:26.691 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:26.691 12:39:35 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:26.691 12:39:35 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:26.691 12:39:35 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:26.691 12:39:35 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:26.691 12:39:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:26.691 12:39:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:26.691 12:39:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:26.691 12:39:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:26.691 INFO: launching applications... 00:04:26.691 Waiting for target to run... 00:04:26.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
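[editor's note] The "[: : integer expression expected" message above is nvmf/common.sh evaluating [ '' -eq 1 ]: the variable behind the test is unset in this environment, [ cannot compare an empty string numerically, and the script simply falls through to the next branch. A hedged illustration of the failure mode and one defensive spelling (the variable name below is a placeholder, not the one used at line 33):

  some_flag=''                          # placeholder for the unset variable
  [ "$some_flag" -eq 1 ]                # prints "[: : integer expression expected"
  [ "${some_flag:-0}" -eq 1 ]           # defaults empty to 0; test is false, no warning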
00:04:26.691 12:39:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:26.691 12:39:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:26.691 12:39:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:26.691 12:39:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:26.691 12:39:35 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:26.691 12:39:35 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:26.691 12:39:35 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:26.691 12:39:35 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:26.691 12:39:35 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:26.691 12:39:35 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:26.691 12:39:35 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:26.691 12:39:35 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:26.691 12:39:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:26.691 12:39:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:26.691 12:39:35 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57660 00:04:26.692 12:39:35 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:26.692 12:39:35 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57660 /var/tmp/spdk_tgt.sock 00:04:26.692 12:39:35 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57660 ']' 00:04:26.692 12:39:35 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:26.692 12:39:35 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:26.692 12:39:35 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:26.692 12:39:35 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:26.692 12:39:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:26.692 12:39:35 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:26.692 [2024-11-15 12:39:35.175143] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
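[editor's note] json_config/common.sh keys its bookkeeping on the app name: one associative array each for the pid, RPC socket, extra launch parameters and config path, which is what lets the same start/shutdown helpers drive more than one app. A small sketch of that layout using the values visible above:

  SPDK=/home/vagrant/spdk_repo/spdk
  declare -A app_pid=([target]='')
  declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
  declare -A app_params=([target]='-m 0x1 -s 1024')
  declare -A configs_path=([target]="$SPDK/test/json_config/extra_key.json")

  app=target
  echo "launching $app with ${app_params[$app]} on ${app_socket[$app]}"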
00:04:26.692 [2024-11-15 12:39:35.175467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57660 ] 00:04:26.951 [2024-11-15 12:39:35.488349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.951 [2024-11-15 12:39:35.510048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.951 [2024-11-15 12:39:35.534878] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:27.887 12:39:36 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:27.887 12:39:36 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:27.887 12:39:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:27.887 00:04:27.887 12:39:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:27.887 INFO: shutting down applications... 00:04:27.887 12:39:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:27.887 12:39:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:27.887 12:39:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:27.887 12:39:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57660 ]] 00:04:27.887 12:39:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57660 00:04:27.887 12:39:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:27.887 12:39:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:27.887 12:39:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57660 00:04:27.887 12:39:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:28.152 12:39:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:28.152 12:39:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:28.152 12:39:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57660 00:04:28.152 12:39:36 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:28.152 12:39:36 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:28.152 12:39:36 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:28.152 12:39:36 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:28.152 SPDK target shutdown done 00:04:28.152 12:39:36 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:28.152 Success 00:04:28.152 ************************************ 00:04:28.152 END TEST json_config_extra_key 00:04:28.152 ************************************ 00:04:28.152 00:04:28.152 real 0m1.787s 00:04:28.152 user 0m1.660s 00:04:28.152 sys 0m0.311s 00:04:28.152 12:39:36 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.152 12:39:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:28.152 12:39:36 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:28.152 12:39:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.152 12:39:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.152 12:39:36 -- 
common/autotest_common.sh@10 -- # set +x 00:04:28.152 ************************************ 00:04:28.152 START TEST alias_rpc 00:04:28.152 ************************************ 00:04:28.152 12:39:36 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:28.418 * Looking for test storage... 00:04:28.418 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:28.418 12:39:36 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:28.418 12:39:36 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:28.418 12:39:36 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:28.418 12:39:36 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:28.418 12:39:36 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.418 12:39:36 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.418 12:39:36 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.418 12:39:36 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.418 12:39:36 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.418 12:39:36 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.418 12:39:36 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.418 12:39:36 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.418 12:39:36 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.418 12:39:36 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.418 12:39:36 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.418 12:39:36 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:28.418 12:39:36 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:28.418 12:39:36 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.418 12:39:36 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:28.418 12:39:36 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:28.418 12:39:36 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:28.418 12:39:36 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.418 12:39:36 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:28.418 12:39:36 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.418 12:39:36 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:28.418 12:39:36 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:28.418 12:39:36 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.418 12:39:36 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:28.418 12:39:36 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.418 12:39:36 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.418 12:39:36 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.418 12:39:36 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:28.418 12:39:36 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.418 12:39:36 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:28.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.418 --rc genhtml_branch_coverage=1 00:04:28.418 --rc genhtml_function_coverage=1 00:04:28.418 --rc genhtml_legend=1 00:04:28.418 --rc geninfo_all_blocks=1 00:04:28.418 --rc geninfo_unexecuted_blocks=1 00:04:28.418 00:04:28.418 ' 00:04:28.418 12:39:36 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:28.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.418 --rc genhtml_branch_coverage=1 00:04:28.418 --rc genhtml_function_coverage=1 00:04:28.418 --rc genhtml_legend=1 00:04:28.418 --rc geninfo_all_blocks=1 00:04:28.418 --rc geninfo_unexecuted_blocks=1 00:04:28.418 00:04:28.418 ' 00:04:28.418 12:39:36 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:28.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.418 --rc genhtml_branch_coverage=1 00:04:28.418 --rc genhtml_function_coverage=1 00:04:28.418 --rc genhtml_legend=1 00:04:28.418 --rc geninfo_all_blocks=1 00:04:28.418 --rc geninfo_unexecuted_blocks=1 00:04:28.418 00:04:28.418 ' 00:04:28.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.418 12:39:36 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:28.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.418 --rc genhtml_branch_coverage=1 00:04:28.418 --rc genhtml_function_coverage=1 00:04:28.418 --rc genhtml_legend=1 00:04:28.418 --rc geninfo_all_blocks=1 00:04:28.418 --rc geninfo_unexecuted_blocks=1 00:04:28.418 00:04:28.418 ' 00:04:28.418 12:39:36 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:28.418 12:39:36 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57732 00:04:28.418 12:39:36 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57732 00:04:28.418 12:39:36 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57732 ']' 00:04:28.418 12:39:36 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.418 12:39:36 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.418 12:39:36 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
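[editor's note] Each TEST block above opens with the same lcov probe: lcov --version is parsed with awk and compared against 2 via scripts/common.sh's lt/cmp_versions, which split both versions on ".", "-" and ":" and compare them field by field. A hedged stand-alone sketch of that comparison, simplified to purely numeric fields:

  version_lt() {                      # returns 0 (true) if $1 < $2
      local IFS='.-:'
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          local x=${a[i]:-0} y=${b[i]:-0}     # missing fields count as 0
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1                        # equal is not less-than
  }
  version_lt 1.15 2 && echo 'lcov 1.15 is older than 2'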
00:04:28.418 12:39:36 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:28.418 12:39:36 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.418 12:39:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.418 [2024-11-15 12:39:36.995206] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:04:28.418 [2024-11-15 12:39:36.995296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57732 ] 00:04:28.677 [2024-11-15 12:39:37.141305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.677 [2024-11-15 12:39:37.170632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.677 [2024-11-15 12:39:37.206545] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:28.677 12:39:37 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.677 12:39:37 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:28.677 12:39:37 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:29.245 12:39:37 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57732 00:04:29.245 12:39:37 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57732 ']' 00:04:29.245 12:39:37 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57732 00:04:29.245 12:39:37 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:29.245 12:39:37 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:29.245 12:39:37 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57732 00:04:29.245 killing process with pid 57732 00:04:29.245 12:39:37 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:29.245 12:39:37 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:29.245 12:39:37 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57732' 00:04:29.245 12:39:37 alias_rpc -- common/autotest_common.sh@973 -- # kill 57732 00:04:29.245 12:39:37 alias_rpc -- common/autotest_common.sh@978 -- # wait 57732 00:04:29.245 ************************************ 00:04:29.245 END TEST alias_rpc 00:04:29.245 ************************************ 00:04:29.245 00:04:29.245 real 0m1.126s 00:04:29.245 user 0m1.287s 00:04:29.245 sys 0m0.325s 00:04:29.245 12:39:37 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.245 12:39:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.504 12:39:37 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:29.504 12:39:37 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:29.504 12:39:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.504 12:39:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.504 12:39:37 -- common/autotest_common.sh@10 -- # set +x 00:04:29.504 ************************************ 00:04:29.504 START TEST spdkcli_tcp 00:04:29.504 ************************************ 00:04:29.504 12:39:37 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:29.504 * Looking for test storage... 
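[editor's note] The alias_rpc teardown above shows the killprocess helper in full: it refuses to act on an empty pid, checks the process still exists, looks up its command name with ps -o comm= so a recycled pid is never signalled by mistake, then kills and waits on it. A condensed, hypothetical sketch of that guard (the real helper also special-cases processes whose comm is "sudo"; that branch is omitted here):

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" 2>/dev/null || return 0          # already gone
      local name
      name=$(ps --no-headers -o comm= "$pid")
      [ "$name" = sudo ] && return 1                  # sudo-wrapped case not handled in this sketch
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true                 # reap it if it is our child
  }
  # killprocess 57732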
00:04:29.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:29.504 12:39:38 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:29.504 12:39:38 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:29.504 12:39:38 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:29.504 12:39:38 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:29.504 12:39:38 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.504 12:39:38 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.504 12:39:38 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.504 12:39:38 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.504 12:39:38 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.504 12:39:38 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.504 12:39:38 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.504 12:39:38 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.504 12:39:38 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.504 12:39:38 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.504 12:39:38 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.504 12:39:38 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:29.504 12:39:38 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:29.504 12:39:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.504 12:39:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:29.504 12:39:38 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:29.504 12:39:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:29.504 12:39:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.504 12:39:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:29.504 12:39:38 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.504 12:39:38 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:29.504 12:39:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:29.504 12:39:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.504 12:39:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:29.504 12:39:38 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.504 12:39:38 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.504 12:39:38 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.504 12:39:38 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:29.504 12:39:38 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.504 12:39:38 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:29.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.504 --rc genhtml_branch_coverage=1 00:04:29.504 --rc genhtml_function_coverage=1 00:04:29.504 --rc genhtml_legend=1 00:04:29.504 --rc geninfo_all_blocks=1 00:04:29.504 --rc geninfo_unexecuted_blocks=1 00:04:29.504 00:04:29.504 ' 00:04:29.504 12:39:38 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:29.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.504 --rc genhtml_branch_coverage=1 00:04:29.504 --rc genhtml_function_coverage=1 00:04:29.504 --rc genhtml_legend=1 00:04:29.504 --rc geninfo_all_blocks=1 00:04:29.504 --rc geninfo_unexecuted_blocks=1 00:04:29.504 
00:04:29.504 ' 00:04:29.504 12:39:38 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:29.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.504 --rc genhtml_branch_coverage=1 00:04:29.504 --rc genhtml_function_coverage=1 00:04:29.504 --rc genhtml_legend=1 00:04:29.504 --rc geninfo_all_blocks=1 00:04:29.504 --rc geninfo_unexecuted_blocks=1 00:04:29.504 00:04:29.504 ' 00:04:29.504 12:39:38 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:29.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.504 --rc genhtml_branch_coverage=1 00:04:29.504 --rc genhtml_function_coverage=1 00:04:29.504 --rc genhtml_legend=1 00:04:29.504 --rc geninfo_all_blocks=1 00:04:29.504 --rc geninfo_unexecuted_blocks=1 00:04:29.504 00:04:29.504 ' 00:04:29.504 12:39:38 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:29.504 12:39:38 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:29.505 12:39:38 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:29.505 12:39:38 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:29.505 12:39:38 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:29.505 12:39:38 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:29.505 12:39:38 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:29.505 12:39:38 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.505 12:39:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:29.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.505 12:39:38 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57809 00:04:29.505 12:39:38 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57809 00:04:29.505 12:39:38 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:29.505 12:39:38 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57809 ']' 00:04:29.505 12:39:38 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.505 12:39:38 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.505 12:39:38 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.505 12:39:38 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.505 12:39:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:29.763 [2024-11-15 12:39:38.172829] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
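[editor's note] The spdkcli_tcp target above is launched with -m 0x3 -p 0: bit i of the core mask selects CPU core i, so 0x3 enables cores 0 and 1 (the two reactors that start in the trace below), and -p 0 makes core 0 the main core. The trace then bridges the target's UNIX-domain RPC socket to 127.0.0.1:9998 with socat and points rpc.py at that TCP address, which is what produces the long rpc_get_methods list below. A hedged sketch of both pieces (the reuseaddr/fork socat options are an addition here so the listener survives more than one connection):

  # Decode a reactor core mask such as -m 0x3.
  mask=0x3
  for (( core = 0; core < 8; core++ )); do
      (( (mask >> core) & 1 )) && echo "reactor expected on core $core" || true
  done

  # Bridge the RPC socket to TCP and query it, as spdkcli_tcp does.
  SPDK=/home/vagrant/spdk_repo/spdk
  socat TCP-LISTEN:9998,reuseaddr,fork UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  "$SPDK/scripts/rpc.py" -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid"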
00:04:29.763 [2024-11-15 12:39:38.172940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57809 ] 00:04:29.763 [2024-11-15 12:39:38.319302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:29.763 [2024-11-15 12:39:38.351039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.763 [2024-11-15 12:39:38.351047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.763 [2024-11-15 12:39:38.389803] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:30.022 12:39:38 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.022 12:39:38 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:30.022 12:39:38 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57813 00:04:30.022 12:39:38 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:30.022 12:39:38 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:30.282 [ 00:04:30.282 "bdev_malloc_delete", 00:04:30.282 "bdev_malloc_create", 00:04:30.282 "bdev_null_resize", 00:04:30.282 "bdev_null_delete", 00:04:30.282 "bdev_null_create", 00:04:30.282 "bdev_nvme_cuse_unregister", 00:04:30.282 "bdev_nvme_cuse_register", 00:04:30.282 "bdev_opal_new_user", 00:04:30.282 "bdev_opal_set_lock_state", 00:04:30.282 "bdev_opal_delete", 00:04:30.282 "bdev_opal_get_info", 00:04:30.282 "bdev_opal_create", 00:04:30.282 "bdev_nvme_opal_revert", 00:04:30.282 "bdev_nvme_opal_init", 00:04:30.282 "bdev_nvme_send_cmd", 00:04:30.282 "bdev_nvme_set_keys", 00:04:30.282 "bdev_nvme_get_path_iostat", 00:04:30.282 "bdev_nvme_get_mdns_discovery_info", 00:04:30.282 "bdev_nvme_stop_mdns_discovery", 00:04:30.282 "bdev_nvme_start_mdns_discovery", 00:04:30.282 "bdev_nvme_set_multipath_policy", 00:04:30.282 "bdev_nvme_set_preferred_path", 00:04:30.282 "bdev_nvme_get_io_paths", 00:04:30.282 "bdev_nvme_remove_error_injection", 00:04:30.282 "bdev_nvme_add_error_injection", 00:04:30.282 "bdev_nvme_get_discovery_info", 00:04:30.282 "bdev_nvme_stop_discovery", 00:04:30.282 "bdev_nvme_start_discovery", 00:04:30.282 "bdev_nvme_get_controller_health_info", 00:04:30.282 "bdev_nvme_disable_controller", 00:04:30.282 "bdev_nvme_enable_controller", 00:04:30.282 "bdev_nvme_reset_controller", 00:04:30.282 "bdev_nvme_get_transport_statistics", 00:04:30.282 "bdev_nvme_apply_firmware", 00:04:30.282 "bdev_nvme_detach_controller", 00:04:30.282 "bdev_nvme_get_controllers", 00:04:30.282 "bdev_nvme_attach_controller", 00:04:30.282 "bdev_nvme_set_hotplug", 00:04:30.282 "bdev_nvme_set_options", 00:04:30.282 "bdev_passthru_delete", 00:04:30.282 "bdev_passthru_create", 00:04:30.282 "bdev_lvol_set_parent_bdev", 00:04:30.282 "bdev_lvol_set_parent", 00:04:30.282 "bdev_lvol_check_shallow_copy", 00:04:30.282 "bdev_lvol_start_shallow_copy", 00:04:30.282 "bdev_lvol_grow_lvstore", 00:04:30.282 "bdev_lvol_get_lvols", 00:04:30.282 "bdev_lvol_get_lvstores", 00:04:30.282 "bdev_lvol_delete", 00:04:30.282 "bdev_lvol_set_read_only", 00:04:30.282 "bdev_lvol_resize", 00:04:30.282 "bdev_lvol_decouple_parent", 00:04:30.282 "bdev_lvol_inflate", 00:04:30.282 "bdev_lvol_rename", 00:04:30.282 "bdev_lvol_clone_bdev", 00:04:30.282 "bdev_lvol_clone", 00:04:30.282 "bdev_lvol_snapshot", 
00:04:30.282 "bdev_lvol_create", 00:04:30.282 "bdev_lvol_delete_lvstore", 00:04:30.282 "bdev_lvol_rename_lvstore", 00:04:30.282 "bdev_lvol_create_lvstore", 00:04:30.282 "bdev_raid_set_options", 00:04:30.282 "bdev_raid_remove_base_bdev", 00:04:30.282 "bdev_raid_add_base_bdev", 00:04:30.282 "bdev_raid_delete", 00:04:30.282 "bdev_raid_create", 00:04:30.282 "bdev_raid_get_bdevs", 00:04:30.282 "bdev_error_inject_error", 00:04:30.282 "bdev_error_delete", 00:04:30.282 "bdev_error_create", 00:04:30.282 "bdev_split_delete", 00:04:30.282 "bdev_split_create", 00:04:30.282 "bdev_delay_delete", 00:04:30.282 "bdev_delay_create", 00:04:30.282 "bdev_delay_update_latency", 00:04:30.282 "bdev_zone_block_delete", 00:04:30.282 "bdev_zone_block_create", 00:04:30.282 "blobfs_create", 00:04:30.282 "blobfs_detect", 00:04:30.282 "blobfs_set_cache_size", 00:04:30.282 "bdev_aio_delete", 00:04:30.282 "bdev_aio_rescan", 00:04:30.282 "bdev_aio_create", 00:04:30.282 "bdev_ftl_set_property", 00:04:30.282 "bdev_ftl_get_properties", 00:04:30.282 "bdev_ftl_get_stats", 00:04:30.282 "bdev_ftl_unmap", 00:04:30.282 "bdev_ftl_unload", 00:04:30.282 "bdev_ftl_delete", 00:04:30.282 "bdev_ftl_load", 00:04:30.282 "bdev_ftl_create", 00:04:30.282 "bdev_virtio_attach_controller", 00:04:30.282 "bdev_virtio_scsi_get_devices", 00:04:30.282 "bdev_virtio_detach_controller", 00:04:30.282 "bdev_virtio_blk_set_hotplug", 00:04:30.282 "bdev_iscsi_delete", 00:04:30.282 "bdev_iscsi_create", 00:04:30.282 "bdev_iscsi_set_options", 00:04:30.282 "bdev_uring_delete", 00:04:30.282 "bdev_uring_rescan", 00:04:30.282 "bdev_uring_create", 00:04:30.282 "accel_error_inject_error", 00:04:30.282 "ioat_scan_accel_module", 00:04:30.282 "dsa_scan_accel_module", 00:04:30.282 "iaa_scan_accel_module", 00:04:30.282 "keyring_file_remove_key", 00:04:30.282 "keyring_file_add_key", 00:04:30.282 "keyring_linux_set_options", 00:04:30.282 "fsdev_aio_delete", 00:04:30.282 "fsdev_aio_create", 00:04:30.282 "iscsi_get_histogram", 00:04:30.282 "iscsi_enable_histogram", 00:04:30.282 "iscsi_set_options", 00:04:30.282 "iscsi_get_auth_groups", 00:04:30.282 "iscsi_auth_group_remove_secret", 00:04:30.282 "iscsi_auth_group_add_secret", 00:04:30.282 "iscsi_delete_auth_group", 00:04:30.282 "iscsi_create_auth_group", 00:04:30.282 "iscsi_set_discovery_auth", 00:04:30.282 "iscsi_get_options", 00:04:30.282 "iscsi_target_node_request_logout", 00:04:30.282 "iscsi_target_node_set_redirect", 00:04:30.282 "iscsi_target_node_set_auth", 00:04:30.282 "iscsi_target_node_add_lun", 00:04:30.282 "iscsi_get_stats", 00:04:30.282 "iscsi_get_connections", 00:04:30.282 "iscsi_portal_group_set_auth", 00:04:30.282 "iscsi_start_portal_group", 00:04:30.282 "iscsi_delete_portal_group", 00:04:30.282 "iscsi_create_portal_group", 00:04:30.282 "iscsi_get_portal_groups", 00:04:30.282 "iscsi_delete_target_node", 00:04:30.282 "iscsi_target_node_remove_pg_ig_maps", 00:04:30.282 "iscsi_target_node_add_pg_ig_maps", 00:04:30.282 "iscsi_create_target_node", 00:04:30.282 "iscsi_get_target_nodes", 00:04:30.282 "iscsi_delete_initiator_group", 00:04:30.282 "iscsi_initiator_group_remove_initiators", 00:04:30.282 "iscsi_initiator_group_add_initiators", 00:04:30.282 "iscsi_create_initiator_group", 00:04:30.282 "iscsi_get_initiator_groups", 00:04:30.282 "nvmf_set_crdt", 00:04:30.282 "nvmf_set_config", 00:04:30.282 "nvmf_set_max_subsystems", 00:04:30.282 "nvmf_stop_mdns_prr", 00:04:30.282 "nvmf_publish_mdns_prr", 00:04:30.282 "nvmf_subsystem_get_listeners", 00:04:30.282 "nvmf_subsystem_get_qpairs", 00:04:30.282 
"nvmf_subsystem_get_controllers", 00:04:30.282 "nvmf_get_stats", 00:04:30.282 "nvmf_get_transports", 00:04:30.282 "nvmf_create_transport", 00:04:30.282 "nvmf_get_targets", 00:04:30.282 "nvmf_delete_target", 00:04:30.282 "nvmf_create_target", 00:04:30.282 "nvmf_subsystem_allow_any_host", 00:04:30.282 "nvmf_subsystem_set_keys", 00:04:30.282 "nvmf_subsystem_remove_host", 00:04:30.282 "nvmf_subsystem_add_host", 00:04:30.282 "nvmf_ns_remove_host", 00:04:30.282 "nvmf_ns_add_host", 00:04:30.282 "nvmf_subsystem_remove_ns", 00:04:30.282 "nvmf_subsystem_set_ns_ana_group", 00:04:30.282 "nvmf_subsystem_add_ns", 00:04:30.282 "nvmf_subsystem_listener_set_ana_state", 00:04:30.282 "nvmf_discovery_get_referrals", 00:04:30.282 "nvmf_discovery_remove_referral", 00:04:30.282 "nvmf_discovery_add_referral", 00:04:30.283 "nvmf_subsystem_remove_listener", 00:04:30.283 "nvmf_subsystem_add_listener", 00:04:30.283 "nvmf_delete_subsystem", 00:04:30.283 "nvmf_create_subsystem", 00:04:30.283 "nvmf_get_subsystems", 00:04:30.283 "env_dpdk_get_mem_stats", 00:04:30.283 "nbd_get_disks", 00:04:30.283 "nbd_stop_disk", 00:04:30.283 "nbd_start_disk", 00:04:30.283 "ublk_recover_disk", 00:04:30.283 "ublk_get_disks", 00:04:30.283 "ublk_stop_disk", 00:04:30.283 "ublk_start_disk", 00:04:30.283 "ublk_destroy_target", 00:04:30.283 "ublk_create_target", 00:04:30.283 "virtio_blk_create_transport", 00:04:30.283 "virtio_blk_get_transports", 00:04:30.283 "vhost_controller_set_coalescing", 00:04:30.283 "vhost_get_controllers", 00:04:30.283 "vhost_delete_controller", 00:04:30.283 "vhost_create_blk_controller", 00:04:30.283 "vhost_scsi_controller_remove_target", 00:04:30.283 "vhost_scsi_controller_add_target", 00:04:30.283 "vhost_start_scsi_controller", 00:04:30.283 "vhost_create_scsi_controller", 00:04:30.283 "thread_set_cpumask", 00:04:30.283 "scheduler_set_options", 00:04:30.283 "framework_get_governor", 00:04:30.283 "framework_get_scheduler", 00:04:30.283 "framework_set_scheduler", 00:04:30.283 "framework_get_reactors", 00:04:30.283 "thread_get_io_channels", 00:04:30.283 "thread_get_pollers", 00:04:30.283 "thread_get_stats", 00:04:30.283 "framework_monitor_context_switch", 00:04:30.283 "spdk_kill_instance", 00:04:30.283 "log_enable_timestamps", 00:04:30.283 "log_get_flags", 00:04:30.283 "log_clear_flag", 00:04:30.283 "log_set_flag", 00:04:30.283 "log_get_level", 00:04:30.283 "log_set_level", 00:04:30.283 "log_get_print_level", 00:04:30.283 "log_set_print_level", 00:04:30.283 "framework_enable_cpumask_locks", 00:04:30.283 "framework_disable_cpumask_locks", 00:04:30.283 "framework_wait_init", 00:04:30.283 "framework_start_init", 00:04:30.283 "scsi_get_devices", 00:04:30.283 "bdev_get_histogram", 00:04:30.283 "bdev_enable_histogram", 00:04:30.283 "bdev_set_qos_limit", 00:04:30.283 "bdev_set_qd_sampling_period", 00:04:30.283 "bdev_get_bdevs", 00:04:30.283 "bdev_reset_iostat", 00:04:30.283 "bdev_get_iostat", 00:04:30.283 "bdev_examine", 00:04:30.283 "bdev_wait_for_examine", 00:04:30.283 "bdev_set_options", 00:04:30.283 "accel_get_stats", 00:04:30.283 "accel_set_options", 00:04:30.283 "accel_set_driver", 00:04:30.283 "accel_crypto_key_destroy", 00:04:30.283 "accel_crypto_keys_get", 00:04:30.283 "accel_crypto_key_create", 00:04:30.283 "accel_assign_opc", 00:04:30.283 "accel_get_module_info", 00:04:30.283 "accel_get_opc_assignments", 00:04:30.283 "vmd_rescan", 00:04:30.283 "vmd_remove_device", 00:04:30.283 "vmd_enable", 00:04:30.283 "sock_get_default_impl", 00:04:30.283 "sock_set_default_impl", 00:04:30.283 "sock_impl_set_options", 00:04:30.283 
"sock_impl_get_options", 00:04:30.283 "iobuf_get_stats", 00:04:30.283 "iobuf_set_options", 00:04:30.283 "keyring_get_keys", 00:04:30.283 "framework_get_pci_devices", 00:04:30.283 "framework_get_config", 00:04:30.283 "framework_get_subsystems", 00:04:30.283 "fsdev_set_opts", 00:04:30.283 "fsdev_get_opts", 00:04:30.283 "trace_get_info", 00:04:30.283 "trace_get_tpoint_group_mask", 00:04:30.283 "trace_disable_tpoint_group", 00:04:30.283 "trace_enable_tpoint_group", 00:04:30.283 "trace_clear_tpoint_mask", 00:04:30.283 "trace_set_tpoint_mask", 00:04:30.283 "notify_get_notifications", 00:04:30.283 "notify_get_types", 00:04:30.283 "spdk_get_version", 00:04:30.283 "rpc_get_methods" 00:04:30.283 ] 00:04:30.283 12:39:38 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:30.283 12:39:38 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:30.283 12:39:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:30.283 12:39:38 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:30.283 12:39:38 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57809 00:04:30.283 12:39:38 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57809 ']' 00:04:30.283 12:39:38 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57809 00:04:30.283 12:39:38 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:30.283 12:39:38 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:30.283 12:39:38 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57809 00:04:30.283 killing process with pid 57809 00:04:30.283 12:39:38 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:30.283 12:39:38 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:30.283 12:39:38 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57809' 00:04:30.283 12:39:38 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57809 00:04:30.283 12:39:38 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57809 00:04:30.541 ************************************ 00:04:30.541 END TEST spdkcli_tcp 00:04:30.541 ************************************ 00:04:30.541 00:04:30.541 real 0m1.171s 00:04:30.541 user 0m2.090s 00:04:30.541 sys 0m0.335s 00:04:30.541 12:39:39 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.541 12:39:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:30.541 12:39:39 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:30.541 12:39:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.542 12:39:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.542 12:39:39 -- common/autotest_common.sh@10 -- # set +x 00:04:30.542 ************************************ 00:04:30.542 START TEST dpdk_mem_utility 00:04:30.542 ************************************ 00:04:30.542 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:30.801 * Looking for test storage... 
00:04:30.801 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:30.801 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:30.801 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:30.801 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:30.801 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:30.801 12:39:39 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.801 12:39:39 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.801 12:39:39 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.801 12:39:39 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.801 12:39:39 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.801 12:39:39 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.801 12:39:39 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.801 12:39:39 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.801 12:39:39 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.801 12:39:39 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.801 12:39:39 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.801 12:39:39 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:30.801 12:39:39 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:30.801 12:39:39 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.801 12:39:39 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:30.801 12:39:39 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:30.801 12:39:39 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:30.801 12:39:39 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.801 12:39:39 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:30.801 12:39:39 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.801 12:39:39 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:30.801 12:39:39 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:30.801 12:39:39 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.801 12:39:39 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:30.801 12:39:39 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.801 12:39:39 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.801 12:39:39 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.801 12:39:39 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:30.801 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.801 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:30.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.801 --rc genhtml_branch_coverage=1 00:04:30.801 --rc genhtml_function_coverage=1 00:04:30.801 --rc genhtml_legend=1 00:04:30.801 --rc geninfo_all_blocks=1 00:04:30.801 --rc geninfo_unexecuted_blocks=1 00:04:30.801 00:04:30.801 ' 00:04:30.801 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:30.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.801 --rc 
genhtml_branch_coverage=1 00:04:30.801 --rc genhtml_function_coverage=1 00:04:30.801 --rc genhtml_legend=1 00:04:30.801 --rc geninfo_all_blocks=1 00:04:30.801 --rc geninfo_unexecuted_blocks=1 00:04:30.801 00:04:30.801 ' 00:04:30.801 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:30.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.801 --rc genhtml_branch_coverage=1 00:04:30.801 --rc genhtml_function_coverage=1 00:04:30.801 --rc genhtml_legend=1 00:04:30.801 --rc geninfo_all_blocks=1 00:04:30.801 --rc geninfo_unexecuted_blocks=1 00:04:30.801 00:04:30.801 ' 00:04:30.801 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:30.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.801 --rc genhtml_branch_coverage=1 00:04:30.801 --rc genhtml_function_coverage=1 00:04:30.801 --rc genhtml_legend=1 00:04:30.801 --rc geninfo_all_blocks=1 00:04:30.801 --rc geninfo_unexecuted_blocks=1 00:04:30.801 00:04:30.801 ' 00:04:30.801 12:39:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:30.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.801 12:39:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57895 00:04:30.801 12:39:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57895 00:04:30.801 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57895 ']' 00:04:30.801 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.801 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.801 12:39:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:30.801 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.801 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.801 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:30.801 [2024-11-15 12:39:39.402357] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
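At this point the harness has launched build/bin/spdk_tgt in the background and is blocking in its waitforlisten helper until the target answers on /var/tmp/spdk.sock. The loop below is only an illustrative stand-in for that helper, not the actual autotest_common.sh implementation; the scripts/rpc.py path and the use of rpc_get_methods as a readiness probe are assumptions.

  # Illustrative readiness wait for a freshly started spdk_tgt (assumed default socket path).
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  spdkpid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
          sleep 0.5
  done
  echo "spdk_tgt (pid $spdkpid) is ready on /var/tmp/spdk.sock"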
00:04:30.801 [2024-11-15 12:39:39.402464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57895 ] 00:04:31.060 [2024-11-15 12:39:39.550644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.060 [2024-11-15 12:39:39.579704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.060 [2024-11-15 12:39:39.615908] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:31.320 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.320 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:31.320 12:39:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:31.320 12:39:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:31.320 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.320 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:31.320 { 00:04:31.320 "filename": "/tmp/spdk_mem_dump.txt" 00:04:31.320 } 00:04:31.320 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.320 12:39:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:31.320 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:31.320 1 heaps totaling size 810.000000 MiB 00:04:31.320 size: 810.000000 MiB heap id: 0 00:04:31.320 end heaps---------- 00:04:31.320 9 mempools totaling size 595.772034 MiB 00:04:31.320 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:31.320 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:31.320 size: 92.545471 MiB name: bdev_io_57895 00:04:31.320 size: 50.003479 MiB name: msgpool_57895 00:04:31.320 size: 36.509338 MiB name: fsdev_io_57895 00:04:31.320 size: 21.763794 MiB name: PDU_Pool 00:04:31.320 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:31.320 size: 4.133484 MiB name: evtpool_57895 00:04:31.320 size: 0.026123 MiB name: Session_Pool 00:04:31.320 end mempools------- 00:04:31.320 6 memzones totaling size 4.142822 MiB 00:04:31.320 size: 1.000366 MiB name: RG_ring_0_57895 00:04:31.320 size: 1.000366 MiB name: RG_ring_1_57895 00:04:31.320 size: 1.000366 MiB name: RG_ring_4_57895 00:04:31.320 size: 1.000366 MiB name: RG_ring_5_57895 00:04:31.320 size: 0.125366 MiB name: RG_ring_2_57895 00:04:31.320 size: 0.015991 MiB name: RG_ring_3_57895 00:04:31.320 end memzones------- 00:04:31.320 12:39:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:31.320 heap id: 0 total size: 810.000000 MiB number of busy elements: 315 number of free elements: 15 00:04:31.320 list of free elements. 
size: 10.812866 MiB 00:04:31.320 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:31.320 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:31.320 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:31.320 element at address: 0x200000400000 with size: 0.993958 MiB 00:04:31.320 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:31.320 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:31.320 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:31.320 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:31.320 element at address: 0x20001a600000 with size: 0.567322 MiB 00:04:31.320 element at address: 0x20000a600000 with size: 0.488892 MiB 00:04:31.320 element at address: 0x200000c00000 with size: 0.487000 MiB 00:04:31.320 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:31.320 element at address: 0x200003e00000 with size: 0.480286 MiB 00:04:31.320 element at address: 0x200027a00000 with size: 0.395752 MiB 00:04:31.320 element at address: 0x200000800000 with size: 0.351746 MiB 00:04:31.320 list of standard malloc elements. size: 199.268250 MiB 00:04:31.320 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:31.320 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:31.320 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:31.320 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:31.320 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:31.320 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:31.320 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:31.320 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:31.320 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:31.320 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:31.320 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:31.320 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:04:31.320 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:04:31.320 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:04:31.320 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:04:31.320 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:04:31.320 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:04:31.320 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:04:31.320 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:04:31.320 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:04:31.320 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:04:31.320 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:04:31.320 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:04:31.320 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:04:31.320 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:04:31.320 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:04:31.320 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:04:31.320 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:04:31.320 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:04:31.320 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:04:31.320 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:04:31.320 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:04:31.320 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:04:31.320 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:04:31.320 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:04:31.320 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:04:31.320 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:31.320 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:31.320 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:04:31.320 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:31.320 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:31.320 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:04:31.320 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:04:31.320 element at address: 0x20000085e580 with size: 0.000183 MiB 00:04:31.320 element at address: 0x20000087e840 with size: 0.000183 MiB 00:04:31.320 element at address: 0x20000087e900 with size: 0.000183 MiB 00:04:31.320 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:04:31.320 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:04:31.320 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:04:31.320 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:04:31.320 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:04:31.320 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:04:31.320 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:04:31.320 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:04:31.320 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:04:31.320 element at address: 0x20000087f080 with size: 0.000183 MiB 00:04:31.320 element at address: 0x20000087f140 with size: 0.000183 MiB 00:04:31.320 element at address: 0x20000087f200 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20000087f380 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20000087f440 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20000087f500 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:31.321 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:31.321 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:04:31.321 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:31.321 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:04:31.321 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:31.321 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:31.321 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a6913c0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a691480 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a691540 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a691600 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a691780 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a691840 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a691900 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a692080 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a692140 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a692200 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a692380 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a692440 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a692500 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a692680 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a692740 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a692800 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a692980 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a692e00 with size: 0.000183 MiB 
00:04:31.321 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a693040 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a693100 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a693280 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a693340 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a693400 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a693580 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a693640 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a693700 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a693880 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a693940 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a694000 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a694180 with size: 0.000183 MiB 00:04:31.321 element at address: 0x20001a694240 with size: 0.000183 MiB 00:04:31.322 element at address: 0x20001a694300 with size: 0.000183 MiB 00:04:31.322 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:04:31.322 element at address: 0x20001a694480 with size: 0.000183 MiB 00:04:31.322 element at address: 0x20001a694540 with size: 0.000183 MiB 00:04:31.322 element at address: 0x20001a694600 with size: 0.000183 MiB 00:04:31.322 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:04:31.322 element at address: 0x20001a694780 with size: 0.000183 MiB 00:04:31.322 element at address: 0x20001a694840 with size: 0.000183 MiB 00:04:31.322 element at address: 0x20001a694900 with size: 0.000183 MiB 00:04:31.322 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:04:31.322 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:04:31.322 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:04:31.322 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:04:31.322 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:04:31.322 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:04:31.322 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:04:31.322 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:04:31.322 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:04:31.322 element at address: 0x20001a695080 with size: 0.000183 MiB 00:04:31.322 element at address: 0x20001a695140 with size: 0.000183 MiB 00:04:31.322 element at address: 0x20001a695200 with size: 0.000183 MiB 00:04:31.322 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:04:31.322 element at 
address: 0x20001a695380 with size: 0.000183 MiB 00:04:31.322 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a65500 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a655c0 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6c1c0 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6c3c0 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6c480 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6c540 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6e4c0 
with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:31.322 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:31.322 list of memzone associated elements. 
size: 599.918884 MiB 00:04:31.322 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:31.322 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:31.322 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:31.322 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:31.322 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:31.322 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57895_0 00:04:31.322 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:31.322 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57895_0 00:04:31.322 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:31.322 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57895_0 00:04:31.322 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:31.322 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:31.322 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:31.322 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:31.322 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:31.322 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57895_0 00:04:31.322 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:31.322 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57895 00:04:31.322 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:31.322 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57895 00:04:31.322 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:31.322 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:31.322 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:31.322 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:31.322 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:31.322 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:31.322 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:31.322 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:31.322 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:31.322 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57895 00:04:31.322 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:31.322 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57895 00:04:31.322 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:31.322 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57895 00:04:31.322 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:04:31.322 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57895 00:04:31.322 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:31.322 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57895 00:04:31.322 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:31.322 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57895 00:04:31.322 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:31.322 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:31.322 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:31.323 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:31.323 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:31.323 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:04:31.323 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:31.323 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57895 00:04:31.323 element at address: 0x20000085e640 with size: 0.125488 MiB 00:04:31.323 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57895 00:04:31.323 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:31.323 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:31.323 element at address: 0x200027a65680 with size: 0.023743 MiB 00:04:31.323 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:31.323 element at address: 0x20000085a380 with size: 0.016113 MiB 00:04:31.323 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57895 00:04:31.323 element at address: 0x200027a6b7c0 with size: 0.002441 MiB 00:04:31.323 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:31.323 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:04:31.323 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57895 00:04:31.323 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:31.323 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57895 00:04:31.323 element at address: 0x20000085a180 with size: 0.000305 MiB 00:04:31.323 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57895 00:04:31.323 element at address: 0x200027a6c280 with size: 0.000305 MiB 00:04:31.323 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:31.323 12:39:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:31.323 12:39:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57895 00:04:31.323 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57895 ']' 00:04:31.323 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57895 00:04:31.323 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:31.323 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.323 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57895 00:04:31.323 killing process with pid 57895 00:04:31.323 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:31.323 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:31.323 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57895' 00:04:31.323 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57895 00:04:31.323 12:39:39 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57895 00:04:31.581 00:04:31.581 real 0m1.007s 00:04:31.581 user 0m1.089s 00:04:31.581 sys 0m0.300s 00:04:31.581 ************************************ 00:04:31.581 END TEST dpdk_mem_utility 00:04:31.581 ************************************ 00:04:31.581 12:39:40 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.581 12:39:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:31.581 12:39:40 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:31.581 12:39:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.581 12:39:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.581 12:39:40 -- common/autotest_common.sh@10 -- # set +x 
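The dpdk_mem_utility run above reduces to two steps that can be replayed by hand against a running spdk_tgt; a minimal sketch, assuming the target is listening on the default /var/tmp/spdk.sock and that rpc.py sits next to dpdk_mem_info.py under scripts/:

  # Ask the target to dump its DPDK memory state; the RPC replies with the dump file
  # (/tmp/spdk_mem_dump.txt in the run above).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats

  # Summarize heaps, mempools and memzones from that dump, as in the first report above.
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

  # Per-element detail for heap id 0, matching the long "list of free elements" output above.
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0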
00:04:31.581 ************************************ 00:04:31.581 START TEST event 00:04:31.581 ************************************ 00:04:31.581 12:39:40 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:31.840 * Looking for test storage... 00:04:31.840 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:31.840 12:39:40 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:31.840 12:39:40 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:31.840 12:39:40 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:31.840 12:39:40 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:31.840 12:39:40 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.840 12:39:40 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.840 12:39:40 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.840 12:39:40 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.840 12:39:40 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.840 12:39:40 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.840 12:39:40 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.840 12:39:40 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.840 12:39:40 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.840 12:39:40 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.840 12:39:40 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.840 12:39:40 event -- scripts/common.sh@344 -- # case "$op" in 00:04:31.840 12:39:40 event -- scripts/common.sh@345 -- # : 1 00:04:31.840 12:39:40 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.840 12:39:40 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:31.840 12:39:40 event -- scripts/common.sh@365 -- # decimal 1 00:04:31.840 12:39:40 event -- scripts/common.sh@353 -- # local d=1 00:04:31.840 12:39:40 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.840 12:39:40 event -- scripts/common.sh@355 -- # echo 1 00:04:31.840 12:39:40 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.840 12:39:40 event -- scripts/common.sh@366 -- # decimal 2 00:04:31.840 12:39:40 event -- scripts/common.sh@353 -- # local d=2 00:04:31.840 12:39:40 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.840 12:39:40 event -- scripts/common.sh@355 -- # echo 2 00:04:31.840 12:39:40 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.841 12:39:40 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.841 12:39:40 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.841 12:39:40 event -- scripts/common.sh@368 -- # return 0 00:04:31.841 12:39:40 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.841 12:39:40 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:31.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.841 --rc genhtml_branch_coverage=1 00:04:31.841 --rc genhtml_function_coverage=1 00:04:31.841 --rc genhtml_legend=1 00:04:31.841 --rc geninfo_all_blocks=1 00:04:31.841 --rc geninfo_unexecuted_blocks=1 00:04:31.841 00:04:31.841 ' 00:04:31.841 12:39:40 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:31.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.841 --rc genhtml_branch_coverage=1 00:04:31.841 --rc genhtml_function_coverage=1 00:04:31.841 --rc genhtml_legend=1 00:04:31.841 --rc 
geninfo_all_blocks=1 00:04:31.841 --rc geninfo_unexecuted_blocks=1 00:04:31.841 00:04:31.841 ' 00:04:31.841 12:39:40 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:31.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.841 --rc genhtml_branch_coverage=1 00:04:31.841 --rc genhtml_function_coverage=1 00:04:31.841 --rc genhtml_legend=1 00:04:31.841 --rc geninfo_all_blocks=1 00:04:31.841 --rc geninfo_unexecuted_blocks=1 00:04:31.841 00:04:31.841 ' 00:04:31.841 12:39:40 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:31.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.841 --rc genhtml_branch_coverage=1 00:04:31.841 --rc genhtml_function_coverage=1 00:04:31.841 --rc genhtml_legend=1 00:04:31.841 --rc geninfo_all_blocks=1 00:04:31.841 --rc geninfo_unexecuted_blocks=1 00:04:31.841 00:04:31.841 ' 00:04:31.841 12:39:40 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:31.841 12:39:40 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:31.841 12:39:40 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:31.841 12:39:40 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:31.841 12:39:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.841 12:39:40 event -- common/autotest_common.sh@10 -- # set +x 00:04:31.841 ************************************ 00:04:31.841 START TEST event_perf 00:04:31.841 ************************************ 00:04:31.841 12:39:40 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:31.841 Running I/O for 1 seconds...[2024-11-15 12:39:40.433836] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:04:31.841 [2024-11-15 12:39:40.434071] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57967 ] 00:04:32.100 [2024-11-15 12:39:40.578132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:32.100 [2024-11-15 12:39:40.609867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.100 [2024-11-15 12:39:40.610003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:32.100 [2024-11-15 12:39:40.610128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.100 Running I/O for 1 seconds...[2024-11-15 12:39:40.610129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:33.036 00:04:33.036 lcore 0: 207011 00:04:33.036 lcore 1: 207011 00:04:33.036 lcore 2: 207011 00:04:33.036 lcore 3: 207011 00:04:33.036 done. 
00:04:33.036 00:04:33.036 real 0m1.238s 00:04:33.036 user 0m4.067s 00:04:33.036 ************************************ 00:04:33.036 END TEST event_perf 00:04:33.036 ************************************ 00:04:33.036 sys 0m0.050s 00:04:33.036 12:39:41 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.036 12:39:41 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:33.036 12:39:41 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:33.036 12:39:41 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:33.036 12:39:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.036 12:39:41 event -- common/autotest_common.sh@10 -- # set +x 00:04:33.036 ************************************ 00:04:33.036 START TEST event_reactor 00:04:33.036 ************************************ 00:04:33.036 12:39:41 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:33.295 [2024-11-15 12:39:41.717422] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:04:33.295 [2024-11-15 12:39:41.717705] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58000 ] 00:04:33.295 [2024-11-15 12:39:41.861649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.295 [2024-11-15 12:39:41.888822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.673 test_start 00:04:34.673 oneshot 00:04:34.673 tick 100 00:04:34.673 tick 100 00:04:34.673 tick 250 00:04:34.673 tick 100 00:04:34.673 tick 100 00:04:34.673 tick 250 00:04:34.673 tick 500 00:04:34.673 tick 100 00:04:34.673 tick 100 00:04:34.673 tick 100 00:04:34.673 tick 250 00:04:34.673 tick 100 00:04:34.673 tick 100 00:04:34.673 test_end 00:04:34.673 00:04:34.673 real 0m1.225s 00:04:34.673 user 0m1.088s 00:04:34.673 sys 0m0.032s 00:04:34.673 ************************************ 00:04:34.673 END TEST event_reactor 00:04:34.673 ************************************ 00:04:34.673 12:39:42 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.673 12:39:42 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:34.673 12:39:42 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:34.673 12:39:42 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:34.673 12:39:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.674 12:39:42 event -- common/autotest_common.sh@10 -- # set +x 00:04:34.674 ************************************ 00:04:34.674 START TEST event_reactor_perf 00:04:34.674 ************************************ 00:04:34.674 12:39:42 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:34.674 [2024-11-15 12:39:42.994960] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
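The event-framework micro-benchmarks in this block are standalone binaries, so the event_perf and reactor runs above, and the reactor_perf run starting here, can be reproduced directly with the same flags the harness passes; a sketch assuming the tree has been built in place:

  # Dispatch events across 4 reactors (cores 0-3, -m 0xF) for 1 second; prints per-lcore counts.
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1

  # Exercise the oneshot and tick scheduling paths on a single reactor for 1 second.
  /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1

  # Measure raw event throughput on one reactor; reported below as events per second.
  /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1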
00:04:34.674 [2024-11-15 12:39:42.995073] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58035 ] 00:04:34.674 [2024-11-15 12:39:43.134668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.674 [2024-11-15 12:39:43.161093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.609 test_start 00:04:35.609 test_end 00:04:35.609 Performance: 452784 events per second 00:04:35.610 00:04:35.610 real 0m1.218s 00:04:35.610 user 0m1.082s 00:04:35.610 sys 0m0.031s 00:04:35.610 12:39:44 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.610 ************************************ 00:04:35.610 END TEST event_reactor_perf 00:04:35.610 ************************************ 00:04:35.610 12:39:44 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:35.610 12:39:44 event -- event/event.sh@49 -- # uname -s 00:04:35.610 12:39:44 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:35.610 12:39:44 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:35.610 12:39:44 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.610 12:39:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.610 12:39:44 event -- common/autotest_common.sh@10 -- # set +x 00:04:35.610 ************************************ 00:04:35.610 START TEST event_scheduler 00:04:35.610 ************************************ 00:04:35.610 12:39:44 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:35.869 * Looking for test storage... 
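The event_scheduler test that starts here drives the scheduler test app entirely over RPC; a condensed sketch of the calls made further below, assuming rpc.py at scripts/rpc.py and the scheduler_plugin module reachable on PYTHONPATH (the harness's rpc_cmd wrapper takes care of that):

  # Start the scheduler test app on 4 cores with core 2 as main, held at --wait-for-rpc.
  /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &

  # Select the dynamic scheduler, then let framework initialization finish.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_set_scheduler dynamic
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init

  # Create pinned threads through the test app's RPC plugin: -m is the cpumask,
  # -a the active percentage (100 = busy-pinned, 0 = idle-pinned), as exercised below.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin \
          scheduler_thread_create -n active_pinned -m 0x1 -a 100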
00:04:35.869 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:35.869 12:39:44 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:35.869 12:39:44 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:35.869 12:39:44 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:35.869 12:39:44 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:35.869 12:39:44 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.869 12:39:44 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.869 12:39:44 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.869 12:39:44 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.869 12:39:44 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.869 12:39:44 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.869 12:39:44 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.869 12:39:44 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.869 12:39:44 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.869 12:39:44 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.869 12:39:44 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.869 12:39:44 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:35.869 12:39:44 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:35.869 12:39:44 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.869 12:39:44 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:35.869 12:39:44 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:35.869 12:39:44 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:35.869 12:39:44 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.869 12:39:44 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:35.869 12:39:44 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.869 12:39:44 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:35.869 12:39:44 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:35.869 12:39:44 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.869 12:39:44 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:35.869 12:39:44 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.869 12:39:44 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.869 12:39:44 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.869 12:39:44 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:35.869 12:39:44 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.869 12:39:44 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:35.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.869 --rc genhtml_branch_coverage=1 00:04:35.869 --rc genhtml_function_coverage=1 00:04:35.869 --rc genhtml_legend=1 00:04:35.869 --rc geninfo_all_blocks=1 00:04:35.869 --rc geninfo_unexecuted_blocks=1 00:04:35.869 00:04:35.869 ' 00:04:35.869 12:39:44 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:35.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.869 --rc genhtml_branch_coverage=1 00:04:35.869 --rc genhtml_function_coverage=1 00:04:35.869 --rc genhtml_legend=1 00:04:35.869 --rc geninfo_all_blocks=1 00:04:35.869 --rc geninfo_unexecuted_blocks=1 00:04:35.869 00:04:35.869 ' 00:04:35.869 12:39:44 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:35.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.869 --rc genhtml_branch_coverage=1 00:04:35.869 --rc genhtml_function_coverage=1 00:04:35.869 --rc genhtml_legend=1 00:04:35.869 --rc geninfo_all_blocks=1 00:04:35.869 --rc geninfo_unexecuted_blocks=1 00:04:35.869 00:04:35.869 ' 00:04:35.869 12:39:44 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:35.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.869 --rc genhtml_branch_coverage=1 00:04:35.869 --rc genhtml_function_coverage=1 00:04:35.869 --rc genhtml_legend=1 00:04:35.869 --rc geninfo_all_blocks=1 00:04:35.869 --rc geninfo_unexecuted_blocks=1 00:04:35.869 00:04:35.869 ' 00:04:35.869 12:39:44 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:35.869 12:39:44 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58105 00:04:35.870 12:39:44 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:35.870 12:39:44 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:35.870 12:39:44 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58105 00:04:35.870 12:39:44 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58105 ']' 00:04:35.870 12:39:44 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.870 12:39:44 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.870 12:39:44 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.870 12:39:44 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.870 12:39:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:35.870 [2024-11-15 12:39:44.465964] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:04:35.870 [2024-11-15 12:39:44.466069] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58105 ] 00:04:36.129 [2024-11-15 12:39:44.615839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:36.129 [2024-11-15 12:39:44.657920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.129 [2024-11-15 12:39:44.657969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.129 [2024-11-15 12:39:44.658052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:36.129 [2024-11-15 12:39:44.658059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:36.129 12:39:44 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.129 12:39:44 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:36.129 12:39:44 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:36.129 12:39:44 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.129 12:39:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:36.129 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:36.129 POWER: Cannot set governor of lcore 0 to userspace 00:04:36.129 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:36.129 POWER: Cannot set governor of lcore 0 to performance 00:04:36.129 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:36.129 POWER: Cannot set governor of lcore 0 to userspace 00:04:36.129 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:36.129 POWER: Cannot set governor of lcore 0 to userspace 00:04:36.129 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:36.129 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:36.129 POWER: Unable to set Power Management Environment for lcore 0 00:04:36.129 [2024-11-15 12:39:44.743904] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:36.129 [2024-11-15 12:39:44.743920] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:04:36.129 [2024-11-15 12:39:44.743931] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:36.129 [2024-11-15 12:39:44.743945] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:36.130 [2024-11-15 12:39:44.743954] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:36.130 [2024-11-15 12:39:44.743962] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:36.130 12:39:44 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.130 12:39:44 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:36.130 12:39:44 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.130 12:39:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:36.130 [2024-11-15 12:39:44.785031] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:36.389 [2024-11-15 12:39:44.805176] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:36.389 12:39:44 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.389 12:39:44 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:36.389 12:39:44 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.389 12:39:44 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.389 12:39:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:36.389 ************************************ 00:04:36.389 START TEST scheduler_create_thread 00:04:36.389 ************************************ 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.389 2 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.389 3 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.389 4 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.389 5 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.389 6 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.389 7 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.389 8 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.389 9 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.389 10 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.389 12:39:44 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.389 12:39:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.767 12:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.767 12:39:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:37.767 12:39:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:37.767 12:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.767 12:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.145 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.145 00:04:39.145 real 0m2.611s 00:04:39.145 user 0m0.019s 00:04:39.145 sys 0m0.007s 00:04:39.145 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.145 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.145 ************************************ 00:04:39.145 END TEST scheduler_create_thread 00:04:39.145 ************************************ 00:04:39.145 12:39:47 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:39.145 12:39:47 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58105 00:04:39.145 12:39:47 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58105 ']' 00:04:39.145 12:39:47 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58105 00:04:39.145 12:39:47 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:39.145 12:39:47 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.145 12:39:47 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58105 00:04:39.145 12:39:47 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:39.145 killing process with pid 58105 00:04:39.145 12:39:47 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:39.145 12:39:47 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
58105' 00:04:39.145 12:39:47 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58105 00:04:39.145 12:39:47 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58105 00:04:39.404 [2024-11-15 12:39:47.908183] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:39.404 00:04:39.404 real 0m3.797s 00:04:39.404 user 0m5.754s 00:04:39.404 sys 0m0.274s 00:04:39.404 ************************************ 00:04:39.404 END TEST event_scheduler 00:04:39.404 ************************************ 00:04:39.404 12:39:48 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.404 12:39:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:39.663 12:39:48 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:39.663 12:39:48 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:39.663 12:39:48 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.663 12:39:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.663 12:39:48 event -- common/autotest_common.sh@10 -- # set +x 00:04:39.663 ************************************ 00:04:39.663 START TEST app_repeat 00:04:39.663 ************************************ 00:04:39.663 12:39:48 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:39.663 12:39:48 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.663 12:39:48 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.663 12:39:48 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:39.663 12:39:48 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.663 12:39:48 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:39.663 12:39:48 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:39.663 12:39:48 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:39.664 12:39:48 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58191 00:04:39.664 Process app_repeat pid: 58191 00:04:39.664 12:39:48 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:39.664 12:39:48 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.664 12:39:48 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58191' 00:04:39.664 12:39:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:39.664 spdk_app_start Round 0 00:04:39.664 12:39:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:39.664 12:39:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58191 /var/tmp/spdk-nbd.sock 00:04:39.664 12:39:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58191 ']' 00:04:39.664 12:39:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:39.664 12:39:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:39.664 12:39:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
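
The event_scheduler trace above reduces to a short RPC sequence against a test app that was started with "-m 0xF -p 0x2 --wait-for-rpc -f": pick the dynamic scheduler, initialize the framework, then create, retune and delete pinned threads through the scheduler_plugin RPCs. A minimal recap of that sequence in plain rpc.py calls might look like the sketch below; the socket path, the PYTHONPATH export and the standalone framing are assumptions, since the real run drives this through scheduler.sh and the rpc_cmd helper.

#!/usr/bin/env bash
# Hedged recap of the scheduler RPC sequence visible in the trace above.
# Assumptions: the scheduler test app is already running with --wait-for-rpc
# and listening on $SOCK; scheduler_plugin.py is importable (PYTHONPATH below
# is an assumption about where the plugin lives).
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk              # repo path as used in this run
RPC="$SPDK/scripts/rpc.py"
SOCK=/var/tmp/spdk.sock                        # socket the test waits on
export PYTHONPATH="$SPDK/test/event/scheduler:${PYTHONPATH:-}"

# Select the dynamic scheduler before framework initialization, then init.
"$RPC" -s "$SOCK" framework_set_scheduler dynamic
"$RPC" -s "$SOCK" framework_start_init

# Create an active pinned thread on core 0, change its active level, delete it.
tid=$("$RPC" -s "$SOCK" --plugin scheduler_plugin scheduler_thread_create \
        -n active_pinned -m 0x1 -a 100)
"$RPC" -s "$SOCK" --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
"$RPC" -s "$SOCK" --plugin scheduler_plugin scheduler_thread_delete "$tid"
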
00:04:39.664 12:39:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.664 12:39:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:39.664 [2024-11-15 12:39:48.133211] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:04:39.664 [2024-11-15 12:39:48.133319] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58191 ] 00:04:39.664 [2024-11-15 12:39:48.274749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:39.664 [2024-11-15 12:39:48.305675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.664 [2024-11-15 12:39:48.305689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.923 [2024-11-15 12:39:48.333792] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:39.923 12:39:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.923 12:39:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:39.923 12:39:48 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:40.182 Malloc0 00:04:40.182 12:39:48 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:40.441 Malloc1 00:04:40.441 12:39:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:40.441 12:39:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.441 12:39:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.441 12:39:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:40.441 12:39:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.441 12:39:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:40.441 12:39:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:40.441 12:39:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.441 12:39:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.441 12:39:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:40.441 12:39:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.441 12:39:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:40.441 12:39:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:40.441 12:39:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:40.441 12:39:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.441 12:39:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:40.700 /dev/nbd0 00:04:40.700 12:39:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:40.700 12:39:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:40.700 12:39:49 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:04:40.700 12:39:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:40.700 12:39:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:40.700 12:39:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:40.700 12:39:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:40.700 12:39:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:40.700 12:39:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:40.700 12:39:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:40.700 12:39:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:40.700 1+0 records in 00:04:40.700 1+0 records out 00:04:40.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239107 s, 17.1 MB/s 00:04:40.700 12:39:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:40.700 12:39:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:40.700 12:39:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:40.700 12:39:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:40.700 12:39:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:40.700 12:39:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.700 12:39:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.700 12:39:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:40.974 /dev/nbd1 00:04:40.974 12:39:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:40.974 12:39:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:40.974 12:39:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:40.974 12:39:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:40.974 12:39:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:40.974 12:39:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:40.974 12:39:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:40.974 12:39:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:40.974 12:39:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:40.974 12:39:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:40.974 12:39:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:40.974 1+0 records in 00:04:40.974 1+0 records out 00:04:40.974 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026186 s, 15.6 MB/s 00:04:40.974 12:39:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:40.974 12:39:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:40.974 12:39:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:40.974 12:39:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:40.974 12:39:49 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:04:40.974 12:39:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.974 12:39:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.974 12:39:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:40.974 12:39:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.974 12:39:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:41.233 { 00:04:41.233 "nbd_device": "/dev/nbd0", 00:04:41.233 "bdev_name": "Malloc0" 00:04:41.233 }, 00:04:41.233 { 00:04:41.233 "nbd_device": "/dev/nbd1", 00:04:41.233 "bdev_name": "Malloc1" 00:04:41.233 } 00:04:41.233 ]' 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:41.233 { 00:04:41.233 "nbd_device": "/dev/nbd0", 00:04:41.233 "bdev_name": "Malloc0" 00:04:41.233 }, 00:04:41.233 { 00:04:41.233 "nbd_device": "/dev/nbd1", 00:04:41.233 "bdev_name": "Malloc1" 00:04:41.233 } 00:04:41.233 ]' 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:41.233 /dev/nbd1' 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:41.233 /dev/nbd1' 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:41.233 256+0 records in 00:04:41.233 256+0 records out 00:04:41.233 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105799 s, 99.1 MB/s 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:41.233 256+0 records in 00:04:41.233 256+0 records out 00:04:41.233 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227932 s, 46.0 MB/s 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:41.233 256+0 records in 00:04:41.233 
256+0 records out 00:04:41.233 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278401 s, 37.7 MB/s 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.233 12:39:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:41.492 12:39:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:41.492 12:39:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:41.492 12:39:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:41.492 12:39:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:41.492 12:39:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:41.492 12:39:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:41.492 12:39:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:41.492 12:39:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:41.492 12:39:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.492 12:39:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:41.751 12:39:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:41.751 12:39:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:41.751 12:39:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:41.751 12:39:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:41.751 12:39:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:04:41.751 12:39:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:41.751 12:39:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:41.751 12:39:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:41.751 12:39:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:41.751 12:39:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.751 12:39:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:42.009 12:39:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:42.009 12:39:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:42.009 12:39:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:42.009 12:39:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:42.009 12:39:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:42.009 12:39:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:42.009 12:39:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:42.009 12:39:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:42.009 12:39:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:42.009 12:39:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:42.009 12:39:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:42.009 12:39:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:42.009 12:39:50 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:42.268 12:39:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:42.527 [2024-11-15 12:39:51.005131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:42.527 [2024-11-15 12:39:51.031707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.527 [2024-11-15 12:39:51.031717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.527 [2024-11-15 12:39:51.059773] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:42.527 [2024-11-15 12:39:51.059878] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:42.527 [2024-11-15 12:39:51.059891] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:45.815 12:39:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:45.815 spdk_app_start Round 1 00:04:45.815 12:39:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:45.815 12:39:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58191 /var/tmp/spdk-nbd.sock 00:04:45.815 12:39:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58191 ']' 00:04:45.815 12:39:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:45.815 12:39:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:45.815 12:39:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
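
Round 0 above is the core of the app_repeat data check: export two malloc bdevs over NBD, write a random pattern through the block devices, and compare it back. A condensed sketch of that pass, using only the RPCs and shell tools that appear in the trace, could read as follows; paths, block size and counts are taken from this run, and error handling plus the waitfornbd polling of /proc/partitions are omitted.

#!/usr/bin/env bash
# Hedged recap of the NBD write/verify pass from the app_repeat trace above.
# Assumes the app_repeat app is listening on /var/tmp/spdk-nbd.sock and the
# nbd kernel module is loaded (the test itself runs `modprobe nbd` first).
set -euo pipefail

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
PATTERN=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

# Two 64 MiB malloc bdevs with 4 KiB blocks; names Malloc0/Malloc1 are assigned
# implicitly, as in the trace.
$RPC bdev_malloc_create 64 4096    # -> Malloc0
$RPC bdev_malloc_create 64 4096    # -> Malloc1
$RPC nbd_start_disk Malloc0 /dev/nbd0
$RPC nbd_start_disk Malloc1 /dev/nbd1

# Write 1 MiB of random data through each NBD node, then read-compare it.
dd if=/dev/urandom of="$PATTERN" bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$PATTERN" of="$nbd" bs=4096 count=256 oflag=direct
    cmp -b -n 1M "$PATTERN" "$nbd"
done
rm "$PATTERN"

# Detach the NBD nodes again.
$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1
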
00:04:45.815 12:39:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.815 12:39:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:45.815 12:39:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.815 12:39:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:45.815 12:39:54 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:45.815 Malloc0 00:04:45.815 12:39:54 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:46.074 Malloc1 00:04:46.074 12:39:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:46.074 12:39:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.074 12:39:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.074 12:39:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:46.074 12:39:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.074 12:39:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:46.074 12:39:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:46.074 12:39:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.074 12:39:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.074 12:39:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:46.074 12:39:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.074 12:39:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:46.074 12:39:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:46.074 12:39:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:46.074 12:39:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.074 12:39:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:46.333 /dev/nbd0 00:04:46.333 12:39:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:46.333 12:39:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:46.333 12:39:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:46.333 12:39:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:46.333 12:39:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:46.333 12:39:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:46.333 12:39:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:46.333 12:39:54 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:46.333 12:39:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:46.333 12:39:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:46.333 12:39:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:46.333 1+0 records in 00:04:46.333 1+0 records out 
00:04:46.333 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280397 s, 14.6 MB/s 00:04:46.333 12:39:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:46.333 12:39:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:46.333 12:39:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:46.333 12:39:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:46.333 12:39:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:46.333 12:39:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:46.333 12:39:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.333 12:39:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:46.591 /dev/nbd1 00:04:46.591 12:39:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:46.591 12:39:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:46.591 12:39:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:46.591 12:39:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:46.591 12:39:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:46.591 12:39:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:46.591 12:39:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:46.591 12:39:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:46.591 12:39:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:46.591 12:39:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:46.591 12:39:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:46.591 1+0 records in 00:04:46.591 1+0 records out 00:04:46.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285889 s, 14.3 MB/s 00:04:46.591 12:39:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:46.591 12:39:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:46.591 12:39:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:46.591 12:39:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:46.591 12:39:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:46.591 12:39:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:46.591 12:39:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.591 12:39:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:46.591 12:39:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.591 12:39:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:47.158 { 00:04:47.158 "nbd_device": "/dev/nbd0", 00:04:47.158 "bdev_name": "Malloc0" 00:04:47.158 }, 00:04:47.158 { 00:04:47.158 "nbd_device": "/dev/nbd1", 00:04:47.158 "bdev_name": "Malloc1" 00:04:47.158 } 
00:04:47.158 ]' 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:47.158 { 00:04:47.158 "nbd_device": "/dev/nbd0", 00:04:47.158 "bdev_name": "Malloc0" 00:04:47.158 }, 00:04:47.158 { 00:04:47.158 "nbd_device": "/dev/nbd1", 00:04:47.158 "bdev_name": "Malloc1" 00:04:47.158 } 00:04:47.158 ]' 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:47.158 /dev/nbd1' 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:47.158 /dev/nbd1' 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:47.158 256+0 records in 00:04:47.158 256+0 records out 00:04:47.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105323 s, 99.6 MB/s 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:47.158 256+0 records in 00:04:47.158 256+0 records out 00:04:47.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0184197 s, 56.9 MB/s 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:47.158 256+0 records in 00:04:47.158 256+0 records out 00:04:47.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259667 s, 40.4 MB/s 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:47.158 12:39:55 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:47.158 12:39:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:47.417 12:39:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:47.417 12:39:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:47.417 12:39:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:47.417 12:39:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:47.417 12:39:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:47.417 12:39:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:47.417 12:39:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:47.417 12:39:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:47.417 12:39:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:47.417 12:39:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:47.676 12:39:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:47.676 12:39:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:47.676 12:39:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:47.676 12:39:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:47.676 12:39:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:47.676 12:39:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:47.676 12:39:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:47.676 12:39:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:47.676 12:39:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:47.676 12:39:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.676 12:39:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:48.244 12:39:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:48.244 12:39:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:48.244 12:39:56 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:04:48.244 12:39:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:48.244 12:39:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:48.244 12:39:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:48.244 12:39:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:48.244 12:39:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:48.244 12:39:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:48.244 12:39:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:48.244 12:39:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:48.244 12:39:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:48.244 12:39:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:48.503 12:39:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:48.503 [2024-11-15 12:39:57.052138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:48.503 [2024-11-15 12:39:57.079287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.503 [2024-11-15 12:39:57.079297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.503 [2024-11-15 12:39:57.107799] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:48.503 [2024-11-15 12:39:57.107908] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:48.503 [2024-11-15 12:39:57.107921] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:51.790 12:39:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:51.790 spdk_app_start Round 2 00:04:51.790 12:39:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:51.790 12:39:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58191 /var/tmp/spdk-nbd.sock 00:04:51.790 12:39:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58191 ']' 00:04:51.790 12:39:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:51.790 12:39:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:51.790 12:39:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
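
Each round ends the same way in the trace: nbd_get_disks is queried, the attached /dev/nbd entries are counted with jq and grep, and only once the count is zero is the app instance killed over RPC before a short pause. A compact sketch of that teardown check, self-contained but mirroring the commands above, might be:

#!/usr/bin/env bash
# Hedged sketch of the per-round teardown check seen in the trace above.
set -euo pipefail

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

# After stopping the NBD nodes, confirm none are left attached.
# grep -c exits non-zero on a zero count, hence the `|| true` guard,
# matching the `true` step in the trace.
count=$($RPC nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[ "$count" -eq 0 ]

# Tear the app instance down over RPC; the driver script then sleeps
# before starting the next round.
$RPC spdk_kill_instance SIGTERM
sleep 3
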
00:04:51.790 12:39:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.790 12:39:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:51.790 12:40:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.790 12:40:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:51.790 12:40:00 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.049 Malloc0 00:04:52.049 12:40:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.308 Malloc1 00:04:52.308 12:40:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:52.308 12:40:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.308 12:40:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.308 12:40:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:52.308 12:40:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.308 12:40:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:52.308 12:40:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:52.308 12:40:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.308 12:40:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.308 12:40:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:52.308 12:40:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.308 12:40:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:52.308 12:40:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:52.308 12:40:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:52.308 12:40:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.308 12:40:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:52.596 /dev/nbd0 00:04:52.596 12:40:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:52.596 12:40:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:52.596 12:40:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:52.596 12:40:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:52.596 12:40:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:52.596 12:40:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:52.596 12:40:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:52.596 12:40:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:52.596 12:40:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:52.596 12:40:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:52.596 12:40:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.596 1+0 records in 00:04:52.596 1+0 records out 
00:04:52.596 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026725 s, 15.3 MB/s 00:04:52.596 12:40:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:52.596 12:40:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:52.596 12:40:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:52.596 12:40:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:52.596 12:40:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:52.596 12:40:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.596 12:40:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.596 12:40:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:52.869 /dev/nbd1 00:04:52.869 12:40:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:52.869 12:40:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:52.869 12:40:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:52.869 12:40:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:52.869 12:40:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:52.869 12:40:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:52.869 12:40:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:52.869 12:40:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:52.869 12:40:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:52.869 12:40:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:52.869 12:40:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.869 1+0 records in 00:04:52.869 1+0 records out 00:04:52.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241814 s, 16.9 MB/s 00:04:52.869 12:40:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:52.869 12:40:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:52.869 12:40:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:52.869 12:40:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:52.869 12:40:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:52.869 12:40:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.869 12:40:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.869 12:40:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.869 12:40:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.869 12:40:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:53.127 { 00:04:53.127 "nbd_device": "/dev/nbd0", 00:04:53.127 "bdev_name": "Malloc0" 00:04:53.127 }, 00:04:53.127 { 00:04:53.127 "nbd_device": "/dev/nbd1", 00:04:53.127 "bdev_name": "Malloc1" 00:04:53.127 } 
00:04:53.127 ]' 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:53.127 { 00:04:53.127 "nbd_device": "/dev/nbd0", 00:04:53.127 "bdev_name": "Malloc0" 00:04:53.127 }, 00:04:53.127 { 00:04:53.127 "nbd_device": "/dev/nbd1", 00:04:53.127 "bdev_name": "Malloc1" 00:04:53.127 } 00:04:53.127 ]' 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:53.127 /dev/nbd1' 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:53.127 /dev/nbd1' 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:53.127 256+0 records in 00:04:53.127 256+0 records out 00:04:53.127 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00784756 s, 134 MB/s 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:53.127 256+0 records in 00:04:53.127 256+0 records out 00:04:53.127 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224404 s, 46.7 MB/s 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:53.127 256+0 records in 00:04:53.127 256+0 records out 00:04:53.127 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275372 s, 38.1 MB/s 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:53.127 12:40:01 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:53.127 12:40:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:53.386 12:40:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:53.386 12:40:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.386 12:40:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.386 12:40:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:53.386 12:40:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:53.386 12:40:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.386 12:40:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:53.645 12:40:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:53.645 12:40:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:53.645 12:40:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:53.645 12:40:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.645 12:40:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.645 12:40:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:53.645 12:40:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:53.645 12:40:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.645 12:40:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.645 12:40:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:53.905 12:40:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:53.905 12:40:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:53.905 12:40:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:53.905 12:40:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.905 12:40:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.905 12:40:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:53.905 12:40:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:53.905 12:40:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.905 12:40:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.905 12:40:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.905 12:40:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.164 12:40:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:54.165 12:40:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:54.165 12:40:02 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:54.165 12:40:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:54.165 12:40:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:54.165 12:40:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.165 12:40:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:54.165 12:40:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:54.165 12:40:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:54.165 12:40:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:54.165 12:40:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:54.165 12:40:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:54.165 12:40:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:54.424 12:40:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:54.682 [2024-11-15 12:40:03.100364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.682 [2024-11-15 12:40:03.130131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.682 [2024-11-15 12:40:03.130142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.682 [2024-11-15 12:40:03.159205] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:54.682 [2024-11-15 12:40:03.159313] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:54.682 [2024-11-15 12:40:03.159326] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:57.969 12:40:06 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58191 /var/tmp/spdk-nbd.sock 00:04:57.969 12:40:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58191 ']' 00:04:57.969 12:40:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:57.969 12:40:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:57.969 12:40:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
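Note on the trace above: the nbd_common.sh helpers exercise SPDK's NBD export path over the dedicated RPC socket /var/tmp/spdk-nbd.sock. Each Malloc bdev is attached to a /dev/nbdX node, probed with a direct-I/O dd, bulk-written from a random test file, verified with cmp, and finally detached, after which nbd_get_disks must report nothing. A minimal manual equivalent of that flow, with paths and bdev/device names reused from the trace purely for illustration, would be:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    # attach a bdev to an NBD node and confirm the kernel sees it
    $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0
    grep -qw nbd0 /proc/partitions
    # write a known pattern and verify it through the block device
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0
    # detach and confirm nothing is left exported
    $rpc -s $sock nbd_stop_disk /dev/nbd0
    $rpc -s $sock nbd_get_disks | jq -r '.[] | .nbd_device'   # expect empty output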
00:04:57.969 12:40:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.969 12:40:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:57.969 12:40:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.969 12:40:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:57.969 12:40:06 event.app_repeat -- event/event.sh@39 -- # killprocess 58191 00:04:57.969 12:40:06 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58191 ']' 00:04:57.969 12:40:06 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58191 00:04:57.969 12:40:06 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:57.969 12:40:06 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.969 12:40:06 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58191 00:04:57.969 12:40:06 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.969 killing process with pid 58191 00:04:57.969 12:40:06 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.969 12:40:06 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58191' 00:04:57.969 12:40:06 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58191 00:04:57.969 12:40:06 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58191 00:04:57.969 spdk_app_start is called in Round 0. 00:04:57.969 Shutdown signal received, stop current app iteration 00:04:57.969 Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 reinitialization... 00:04:57.969 spdk_app_start is called in Round 1. 00:04:57.969 Shutdown signal received, stop current app iteration 00:04:57.969 Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 reinitialization... 00:04:57.969 spdk_app_start is called in Round 2. 00:04:57.969 Shutdown signal received, stop current app iteration 00:04:57.969 Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 reinitialization... 00:04:57.969 spdk_app_start is called in Round 3. 00:04:57.969 Shutdown signal received, stop current app iteration 00:04:57.969 12:40:06 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:57.969 12:40:06 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:57.969 00:04:57.969 real 0m18.340s 00:04:57.969 user 0m42.262s 00:04:57.969 sys 0m2.389s 00:04:57.969 12:40:06 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.969 ************************************ 00:04:57.969 END TEST app_repeat 00:04:57.969 ************************************ 00:04:57.970 12:40:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:57.970 12:40:06 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:57.970 12:40:06 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:57.970 12:40:06 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.970 12:40:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.970 12:40:06 event -- common/autotest_common.sh@10 -- # set +x 00:04:57.970 ************************************ 00:04:57.970 START TEST cpu_locks 00:04:57.970 ************************************ 00:04:57.970 12:40:06 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:57.970 * Looking for test storage... 
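Note: the killprocess helper whose trace closes app_repeat above follows a simple pattern: confirm the pid is still alive with kill -0, look up the process name (reactor_0 for an SPDK app), skip the direct kill if the process is a sudo wrapper, then signal it and wait so the test sees a clean exit. Reconstructed from the traced steps (a sketch, not the verbatim autotest_common.sh source; the sudo branch is omitted):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                                   # fail early if the process is already gone
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
        if [ "$process_name" != sudo ]; then
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"                                  # reap the reactor process
        fi
    }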
00:04:57.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:57.970 12:40:06 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:57.970 12:40:06 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:04:57.970 12:40:06 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:58.229 12:40:06 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:58.229 12:40:06 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.229 12:40:06 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.229 12:40:06 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.229 12:40:06 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.229 12:40:06 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.229 12:40:06 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.229 12:40:06 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.229 12:40:06 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.229 12:40:06 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.229 12:40:06 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.229 12:40:06 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.229 12:40:06 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:58.229 12:40:06 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:58.229 12:40:06 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.229 12:40:06 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.229 12:40:06 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:58.229 12:40:06 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:58.229 12:40:06 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.229 12:40:06 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:58.229 12:40:06 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.229 12:40:06 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:58.229 12:40:06 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:58.229 12:40:06 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.229 12:40:06 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:58.229 12:40:06 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.229 12:40:06 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.229 12:40:06 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.229 12:40:06 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:58.229 12:40:06 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.230 12:40:06 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:58.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.230 --rc genhtml_branch_coverage=1 00:04:58.230 --rc genhtml_function_coverage=1 00:04:58.230 --rc genhtml_legend=1 00:04:58.230 --rc geninfo_all_blocks=1 00:04:58.230 --rc geninfo_unexecuted_blocks=1 00:04:58.230 00:04:58.230 ' 00:04:58.230 12:40:06 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:58.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.230 --rc genhtml_branch_coverage=1 00:04:58.230 --rc genhtml_function_coverage=1 
00:04:58.230 --rc genhtml_legend=1 00:04:58.230 --rc geninfo_all_blocks=1 00:04:58.230 --rc geninfo_unexecuted_blocks=1 00:04:58.230 00:04:58.230 ' 00:04:58.230 12:40:06 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:58.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.230 --rc genhtml_branch_coverage=1 00:04:58.230 --rc genhtml_function_coverage=1 00:04:58.230 --rc genhtml_legend=1 00:04:58.230 --rc geninfo_all_blocks=1 00:04:58.230 --rc geninfo_unexecuted_blocks=1 00:04:58.230 00:04:58.230 ' 00:04:58.230 12:40:06 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:58.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.230 --rc genhtml_branch_coverage=1 00:04:58.230 --rc genhtml_function_coverage=1 00:04:58.230 --rc genhtml_legend=1 00:04:58.230 --rc geninfo_all_blocks=1 00:04:58.230 --rc geninfo_unexecuted_blocks=1 00:04:58.230 00:04:58.230 ' 00:04:58.230 12:40:06 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:58.230 12:40:06 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:58.230 12:40:06 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:58.230 12:40:06 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:58.230 12:40:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.230 12:40:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.230 12:40:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.230 ************************************ 00:04:58.230 START TEST default_locks 00:04:58.230 ************************************ 00:04:58.230 12:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:58.230 12:40:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58625 00:04:58.230 12:40:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58625 00:04:58.230 12:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58625 ']' 00:04:58.230 12:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.230 12:40:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:58.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.230 12:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.230 12:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.230 12:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.230 12:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.230 [2024-11-15 12:40:06.744078] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
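Note: default_locks starts a single spdk_tgt with -m 0x1 and, in the trace that follows, asserts from outside the process that the core lock is actually held. SPDK pins each claimed core through a per-core lock file (/var/tmp/spdk_cpu_lock_<core>, as the check_remaining_locks glob later in this log shows), so lslocks against the target pid is enough to verify it. A compact rendering of the locks_exist check used here, assuming util-linux lslocks is available:

    locks_exist() {
        local pid=$1
        # spdk_tgt holds a lock per claimed core; lslocks lists it against the pid
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    locks_exist 58625 && echo "core lock for -m 0x1 is held"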
00:04:58.230 [2024-11-15 12:40:06.744215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58625 ] 00:04:58.230 [2024-11-15 12:40:06.889160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.488 [2024-11-15 12:40:06.919311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.488 [2024-11-15 12:40:06.955639] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:59.423 12:40:07 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.423 12:40:07 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:59.423 12:40:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58625 00:04:59.423 12:40:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58625 00:04:59.423 12:40:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:59.682 12:40:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58625 00:04:59.682 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58625 ']' 00:04:59.682 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58625 00:04:59.682 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:59.682 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.682 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58625 00:04:59.682 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.682 killing process with pid 58625 00:04:59.682 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.682 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58625' 00:04:59.682 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58625 00:04:59.682 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58625 00:04:59.941 12:40:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58625 00:04:59.941 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:59.941 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58625 00:04:59.941 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:59.941 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.941 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:59.941 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.941 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58625 00:04:59.941 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58625 ']' 00:04:59.941 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.941 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.941 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.941 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.941 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.941 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.941 ERROR: process (pid: 58625) is no longer running 00:04:59.941 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58625) - No such process 00:04:59.941 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.941 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:59.941 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:59.941 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:59.941 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:59.941 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:59.941 12:40:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:59.941 12:40:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:59.941 12:40:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:59.941 12:40:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:59.941 00:04:59.941 real 0m1.783s 00:04:59.941 user 0m2.112s 00:04:59.941 sys 0m0.430s 00:04:59.941 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.941 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.941 ************************************ 00:04:59.941 END TEST default_locks 00:04:59.941 ************************************ 00:04:59.941 12:40:08 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:59.941 12:40:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.941 12:40:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.941 12:40:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.941 ************************************ 00:04:59.941 START TEST default_locks_via_rpc 00:04:59.941 ************************************ 00:04:59.941 12:40:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:59.941 12:40:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58671 00:04:59.941 12:40:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58671 00:04:59.941 12:40:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.941 12:40:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58671 ']' 00:04:59.941 12:40:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.941 12:40:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:04:59.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.941 12:40:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.941 12:40:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.941 12:40:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.941 [2024-11-15 12:40:08.567175] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:04:59.941 [2024-11-15 12:40:08.567266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58671 ] 00:05:00.200 [2024-11-15 12:40:08.706602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.200 [2024-11-15 12:40:08.736344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.200 [2024-11-15 12:40:08.776537] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:00.459 12:40:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.459 12:40:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:00.459 12:40:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:00.459 12:40:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.459 12:40:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.459 12:40:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.459 12:40:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:00.459 12:40:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:00.459 12:40:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:00.459 12:40:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:00.459 12:40:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:00.459 12:40:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.459 12:40:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.459 12:40:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.459 12:40:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58671 00:05:00.459 12:40:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58671 00:05:00.459 12:40:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:00.718 12:40:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58671 00:05:00.718 12:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58671 ']' 00:05:00.718 12:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58671 00:05:00.718 12:40:09 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:00.718 12:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.718 12:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58671 00:05:00.718 12:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.718 killing process with pid 58671 00:05:00.718 12:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.718 12:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58671' 00:05:00.718 12:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58671 00:05:00.718 12:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58671 00:05:00.977 00:05:00.977 real 0m0.956s 00:05:00.977 user 0m1.004s 00:05:00.977 sys 0m0.370s 00:05:00.977 12:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.977 12:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.977 ************************************ 00:05:00.977 END TEST default_locks_via_rpc 00:05:00.977 ************************************ 00:05:00.977 12:40:09 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:00.977 12:40:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.977 12:40:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.977 12:40:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.977 ************************************ 00:05:00.977 START TEST non_locking_app_on_locked_coremask 00:05:00.977 ************************************ 00:05:00.977 12:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:00.977 12:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58709 00:05:00.977 12:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58709 /var/tmp/spdk.sock 00:05:00.977 12:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:00.977 12:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58709 ']' 00:05:00.977 12:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.977 12:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.977 12:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
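Note: default_locks_via_rpc, traced above, drives the same core lock at runtime through the RPC interface instead of a command-line flag: framework_disable_cpumask_locks drops the per-core lock files and framework_enable_cpumask_locks re-acquires them. A manual round-trip against the running target might look like this (pid taken from the trace; a sketch of the checks, not the test script itself):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc framework_disable_cpumask_locks
    ls /var/tmp/spdk_cpu_lock* 2>/dev/null | wc -l          # expect 0 while locks are released
    $rpc framework_enable_cpumask_locks
    lslocks -p 58671 | grep -q spdk_cpu_lock && echo "lock re-acquired"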
00:05:00.977 12:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.977 12:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:00.977 [2024-11-15 12:40:09.590392] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:05:00.977 [2024-11-15 12:40:09.590493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58709 ] 00:05:01.236 [2024-11-15 12:40:09.733590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.236 [2024-11-15 12:40:09.762004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.236 [2024-11-15 12:40:09.799276] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:01.495 12:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.496 12:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:01.496 12:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58723 00:05:01.496 12:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:01.496 12:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58723 /var/tmp/spdk2.sock 00:05:01.496 12:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58723 ']' 00:05:01.496 12:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:01.496 12:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:01.496 12:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:01.496 12:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.496 12:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:01.496 [2024-11-15 12:40:09.990414] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:05:01.496 [2024-11-15 12:40:09.990514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58723 ] 00:05:01.496 [2024-11-15 12:40:10.154340] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
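Note: non_locking_app_on_locked_coremask shows why the second target (pid 58723) can share core 0 with the first: it is launched with --disable-cpumask-locks, so it never tries to claim /var/tmp/spdk_cpu_lock_000, and the "CPU core locks deactivated." notice above confirms that. Reduced to the flags that matter, the two launches look like:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    # first instance claims core 0 and holds its lock file
    $spdk_tgt -m 0x1 &
    # second instance shares core 0 but skips lock acquisition entirely,
    # and answers on a separate RPC socket so both can be driven independently
    $spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &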
00:05:01.496 [2024-11-15 12:40:10.154398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.755 [2024-11-15 12:40:10.227974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.755 [2024-11-15 12:40:10.312942] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:02.323 12:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.323 12:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:02.323 12:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58709 00:05:02.323 12:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58709 00:05:02.323 12:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:02.891 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58709 00:05:02.891 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58709 ']' 00:05:02.891 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58709 00:05:02.891 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:02.891 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.891 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58709 00:05:02.891 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.891 killing process with pid 58709 00:05:02.891 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.891 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58709' 00:05:02.891 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58709 00:05:02.891 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58709 00:05:03.459 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58723 00:05:03.459 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58723 ']' 00:05:03.459 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58723 00:05:03.459 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:03.459 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.459 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58723 00:05:03.459 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.459 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.459 killing process with pid 58723 00:05:03.459 12:40:11 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58723' 00:05:03.459 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58723 00:05:03.459 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58723 00:05:03.718 00:05:03.718 real 0m2.683s 00:05:03.718 user 0m3.171s 00:05:03.718 sys 0m0.736s 00:05:03.718 12:40:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.718 12:40:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:03.718 ************************************ 00:05:03.718 END TEST non_locking_app_on_locked_coremask 00:05:03.718 ************************************ 00:05:03.718 12:40:12 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:03.718 12:40:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.718 12:40:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.718 12:40:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.718 ************************************ 00:05:03.718 START TEST locking_app_on_unlocked_coremask 00:05:03.718 ************************************ 00:05:03.718 12:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:03.718 12:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58779 00:05:03.718 12:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58779 /var/tmp/spdk.sock 00:05:03.718 12:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58779 ']' 00:05:03.718 12:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:03.718 12:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.718 12:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.718 12:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.718 12:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.718 12:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:03.718 [2024-11-15 12:40:12.325873] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:05:03.718 [2024-11-15 12:40:12.326425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58779 ] 00:05:03.977 [2024-11-15 12:40:12.470625] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:03.977 [2024-11-15 12:40:12.470671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.977 [2024-11-15 12:40:12.499176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.977 [2024-11-15 12:40:12.536196] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:04.235 12:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.235 12:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:04.235 12:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58782 00:05:04.235 12:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:04.235 12:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58782 /var/tmp/spdk2.sock 00:05:04.235 12:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58782 ']' 00:05:04.235 12:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:04.235 12:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:04.235 12:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:04.235 12:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.235 12:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.235 [2024-11-15 12:40:12.741876] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
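Note: locking_app_on_unlocked_coremask inverts the previous case. Here the first target (pid 58779) is the one started with --disable-cpumask-locks, so core 0 is left unclaimed, and the second, lock-taking target (pid 58782, on /var/tmp/spdk2.sock) is expected to start cleanly and end up owning the core lock, which the lslocks check further down verifies. An illustrative check of that end state:

    # only the second (locking) instance should own the core-0 lock
    lslocks -p 58782 | grep -q spdk_cpu_lock && echo "pid 58782 holds the core lock"
    lslocks -p 58779 | grep -c spdk_cpu_lock    # expected: 0 (started with --disable-cpumask-locks)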
00:05:04.235 [2024-11-15 12:40:12.742052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58782 ] 00:05:04.494 [2024-11-15 12:40:12.904031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.494 [2024-11-15 12:40:12.965056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.494 [2024-11-15 12:40:13.043667] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:05.061 12:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.061 12:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:05.061 12:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58782 00:05:05.061 12:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58782 00:05:05.061 12:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:05.997 12:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58779 00:05:05.997 12:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58779 ']' 00:05:05.997 12:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58779 00:05:05.997 12:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:05.997 12:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.997 12:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58779 00:05:05.997 12:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.997 killing process with pid 58779 00:05:05.997 12:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.997 12:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58779' 00:05:05.997 12:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58779 00:05:05.997 12:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58779 00:05:06.564 12:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58782 00:05:06.564 12:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58782 ']' 00:05:06.564 12:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58782 00:05:06.564 12:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:06.564 12:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.564 12:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58782 00:05:06.564 12:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:06.564 12:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:06.564 12:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58782' 00:05:06.564 killing process with pid 58782 00:05:06.564 12:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58782 00:05:06.564 12:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58782 00:05:06.822 00:05:06.822 real 0m3.000s 00:05:06.822 user 0m3.574s 00:05:06.822 sys 0m0.845s 00:05:06.822 ************************************ 00:05:06.822 END TEST locking_app_on_unlocked_coremask 00:05:06.822 ************************************ 00:05:06.822 12:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.822 12:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.822 12:40:15 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:06.822 12:40:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.822 12:40:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.822 12:40:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:06.822 ************************************ 00:05:06.822 START TEST locking_app_on_locked_coremask 00:05:06.822 ************************************ 00:05:06.822 12:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:06.822 12:40:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58849 00:05:06.822 12:40:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58849 /var/tmp/spdk.sock 00:05:06.822 12:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58849 ']' 00:05:06.822 12:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.822 12:40:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:06.822 12:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.822 12:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.822 12:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.822 12:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.822 [2024-11-15 12:40:15.378272] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:06.822 [2024-11-15 12:40:15.378405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58849 ] 00:05:07.081 [2024-11-15 12:40:15.521829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.081 [2024-11-15 12:40:15.554112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.081 [2024-11-15 12:40:15.596230] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:08.019 12:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.019 12:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:08.019 12:40:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58865 00:05:08.019 12:40:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:08.019 12:40:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58865 /var/tmp/spdk2.sock 00:05:08.019 12:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:08.019 12:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58865 /var/tmp/spdk2.sock 00:05:08.019 12:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:08.019 12:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.019 12:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:08.019 12:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.019 12:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58865 /var/tmp/spdk2.sock 00:05:08.019 12:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58865 ']' 00:05:08.019 12:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:08.019 12:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:08.019 12:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:08.019 12:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.019 12:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.019 [2024-11-15 12:40:16.394884] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
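Note: locking_app_on_locked_coremask wraps the second launch in the NOT helper, so the test passes only if waitforlisten for pid 58865 fails; the failure itself appears just below as "Cannot create lock on core 0, probably process 58849 has claimed it." The helper's contract, as it shows in the trace, is "run the command and invert its status" (a sketch, not the verbatim autotest_common.sh source, ignoring the signal-exit handling the real helper also performs via its es > 128 check):

    NOT() {
        # succeed only when the wrapped command fails
        if "$@"; then
            return 1
        fi
        return 0
    }
    # expected to fail: core 0 is already locked by the first target
    NOT waitforlisten 58865 /var/tmp/spdk2.sock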
00:05:08.019 [2024-11-15 12:40:16.395003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58865 ] 00:05:08.019 [2024-11-15 12:40:16.550475] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58849 has claimed it. 00:05:08.019 [2024-11-15 12:40:16.550541] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:08.587 ERROR: process (pid: 58865) is no longer running 00:05:08.587 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58865) - No such process 00:05:08.587 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.587 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:08.587 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:08.587 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:08.587 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:08.587 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:08.587 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58849 00:05:08.587 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58849 00:05:08.587 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:09.155 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58849 00:05:09.155 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58849 ']' 00:05:09.155 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58849 00:05:09.155 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:09.155 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.155 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58849 00:05:09.155 killing process with pid 58849 00:05:09.155 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.155 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.155 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58849' 00:05:09.155 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58849 00:05:09.155 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58849 00:05:09.416 00:05:09.416 real 0m2.514s 00:05:09.416 user 0m3.062s 00:05:09.416 sys 0m0.516s 00:05:09.416 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.416 ************************************ 00:05:09.416 
12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.416 END TEST locking_app_on_locked_coremask 00:05:09.416 ************************************ 00:05:09.416 12:40:17 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:09.416 12:40:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.416 12:40:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.416 12:40:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.416 ************************************ 00:05:09.416 START TEST locking_overlapped_coremask 00:05:09.416 ************************************ 00:05:09.416 12:40:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:09.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.416 12:40:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58911 00:05:09.416 12:40:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58911 /var/tmp/spdk.sock 00:05:09.416 12:40:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:09.416 12:40:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58911 ']' 00:05:09.416 12:40:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.416 12:40:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.416 12:40:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.416 12:40:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.416 12:40:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.416 [2024-11-15 12:40:17.941927] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:09.416 [2024-11-15 12:40:17.942044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58911 ] 00:05:09.687 [2024-11-15 12:40:18.090657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:09.687 [2024-11-15 12:40:18.126467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.687 [2024-11-15 12:40:18.126587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:09.687 [2024-11-15 12:40:18.126591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.687 [2024-11-15 12:40:18.168342] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:09.687 12:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.687 12:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:09.687 12:40:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58921 00:05:09.687 12:40:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58921 /var/tmp/spdk2.sock 00:05:09.687 12:40:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:09.687 12:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:09.687 12:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58921 /var/tmp/spdk2.sock 00:05:09.687 12:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:09.687 12:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.688 12:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:09.688 12:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.688 12:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58921 /var/tmp/spdk2.sock 00:05:09.688 12:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58921 ']' 00:05:09.688 12:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:09.688 12:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.688 12:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:09.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:09.688 12:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.688 12:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.958 [2024-11-15 12:40:18.360411] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:09.958 [2024-11-15 12:40:18.360522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58921 ] 00:05:09.958 [2024-11-15 12:40:18.522943] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58911 has claimed it. 00:05:09.958 [2024-11-15 12:40:18.523036] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:10.525 ERROR: process (pid: 58921) is no longer running 00:05:10.525 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58921) - No such process 00:05:10.525 12:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.525 12:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:10.525 12:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:10.525 12:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:10.525 12:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:10.525 12:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:10.525 12:40:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:10.525 12:40:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:10.525 12:40:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:10.525 12:40:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:10.525 12:40:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58911 00:05:10.525 12:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 58911 ']' 00:05:10.526 12:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 58911 00:05:10.526 12:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:10.526 12:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.526 12:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58911 00:05:10.526 killing process with pid 58911 00:05:10.526 12:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:10.526 12:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:10.526 12:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58911' 00:05:10.526 12:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 58911 00:05:10.526 12:40:19 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 58911 00:05:10.785 00:05:10.785 real 0m1.480s 00:05:10.785 user 0m4.103s 00:05:10.785 sys 0m0.303s 00:05:10.785 12:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.785 ************************************ 00:05:10.785 END TEST locking_overlapped_coremask 00:05:10.785 ************************************ 00:05:10.785 12:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.785 12:40:19 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:10.785 12:40:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.785 12:40:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.785 12:40:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.785 ************************************ 00:05:10.785 START TEST locking_overlapped_coremask_via_rpc 00:05:10.785 ************************************ 00:05:10.785 12:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:10.785 12:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58961 00:05:10.785 12:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58961 /var/tmp/spdk.sock 00:05:10.785 12:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:10.785 12:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58961 ']' 00:05:10.785 12:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.785 12:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.785 12:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.785 12:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.785 12:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.785 [2024-11-15 12:40:19.448359] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:05:10.785 [2024-11-15 12:40:19.448443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58961 ] 00:05:11.044 [2024-11-15 12:40:19.584824] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
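Note on the locking_overlapped_coremask run above: the first target was started with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so the two masks overlap on core 2 and the second target fails with "Cannot create lock on core 2". SPDK enforces this with per-core lock files, which show up in this log as /var/tmp/spdk_cpu_lock_000 through /var/tmp/spdk_cpu_lock_002 and are checked with lslocks. A minimal bash sketch of the same advisory-lock idea, assuming flock(1) is available; this only illustrates the mechanism and is not SPDK's actual implementation:

    core=2
    lockfile="/var/tmp/spdk_cpu_lock_$(printf '%03d' "$core")"
    exec 9>"$lockfile"              # open (or create) the per-core lock file on fd 9
    if ! flock -n 9; then           # try to take a non-blocking exclusive lock
        echo "cannot lock core $core, another process has claimed it" >&2
        exit 1
    fi
    echo "core $core claimed by pid $$"   # lock is held as long as fd 9 stays open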
00:05:11.045 [2024-11-15 12:40:19.585042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:11.045 [2024-11-15 12:40:19.615950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.045 [2024-11-15 12:40:19.616063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:11.045 [2024-11-15 12:40:19.616069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.045 [2024-11-15 12:40:19.655005] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:11.304 12:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.304 12:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:11.304 12:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58966 00:05:11.304 12:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:11.304 12:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 58966 /var/tmp/spdk2.sock 00:05:11.304 12:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58966 ']' 00:05:11.304 12:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:11.304 12:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.304 12:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:11.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:11.304 12:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.304 12:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.304 [2024-11-15 12:40:19.846493] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:05:11.304 [2024-11-15 12:40:19.846783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58966 ] 00:05:11.563 [2024-11-15 12:40:20.005082] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:11.563 [2024-11-15 12:40:20.005119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:11.563 [2024-11-15 12:40:20.070857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:11.563 [2024-11-15 12:40:20.070918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:11.563 [2024-11-15 12:40:20.070920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:11.563 [2024-11-15 12:40:20.146547] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:12.499 12:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.499 12:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:12.499 12:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:12.499 12:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.499 12:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.499 12:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.499 12:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:12.499 12:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:12.499 12:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:12.499 12:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:12.499 12:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.499 12:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:12.499 12:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.499 12:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:12.499 12:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.499 12:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.499 [2024-11-15 12:40:20.820758] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58961 has claimed it. 
00:05:12.499 request: 00:05:12.499 { 00:05:12.499 "method": "framework_enable_cpumask_locks", 00:05:12.499 "req_id": 1 00:05:12.499 } 00:05:12.499 Got JSON-RPC error response 00:05:12.499 response: 00:05:12.499 { 00:05:12.499 "code": -32603, 00:05:12.499 "message": "Failed to claim CPU core: 2" 00:05:12.499 } 00:05:12.499 12:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:12.499 12:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:12.499 12:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:12.499 12:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:12.499 12:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:12.499 12:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58961 /var/tmp/spdk.sock 00:05:12.499 12:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58961 ']' 00:05:12.499 12:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.499 12:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.499 12:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.499 12:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.499 12:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.499 12:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.499 12:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:12.499 12:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 58966 /var/tmp/spdk2.sock 00:05:12.499 12:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58966 ']' 00:05:12.499 12:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:12.499 12:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.499 12:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:12.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
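The -32603 "Failed to claim CPU core: 2" response above is the RPC-driven variant of the same overlap: both targets start with --disable-cpumask-locks, the first (pid 58961, cores 0-2) then takes the locks via framework_enable_cpumask_locks, and the second (pid 58966, cores 2-4) fails when it asks for core 2. The rpc_cmd wrappers in the log boil down to direct rpc.py calls; a sketch assuming the repo layout used throughout this run:

    # first target, default socket /var/tmp/spdk.sock: succeeds
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
    # second target on /var/tmp/spdk2.sock: rejected with -32603, core 2 already locked
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks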
00:05:12.499 12:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.499 12:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.759 12:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.759 12:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:12.759 12:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:12.759 12:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:12.759 ************************************ 00:05:12.759 END TEST locking_overlapped_coremask_via_rpc 00:05:12.759 ************************************ 00:05:12.759 12:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:12.759 12:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:12.759 00:05:12.759 real 0m1.912s 00:05:12.759 user 0m1.133s 00:05:12.759 sys 0m0.135s 00:05:12.759 12:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.759 12:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.759 12:40:21 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:12.759 12:40:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58961 ]] 00:05:12.759 12:40:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58961 00:05:12.759 12:40:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58961 ']' 00:05:12.759 12:40:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58961 00:05:12.759 12:40:21 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:12.759 12:40:21 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.759 12:40:21 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58961 00:05:12.759 killing process with pid 58961 00:05:12.759 12:40:21 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.759 12:40:21 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.759 12:40:21 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58961' 00:05:12.759 12:40:21 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58961 00:05:12.759 12:40:21 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58961 00:05:13.018 12:40:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58966 ]] 00:05:13.018 12:40:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58966 00:05:13.018 12:40:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58966 ']' 00:05:13.018 12:40:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58966 00:05:13.018 12:40:21 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:13.018 12:40:21 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.018 
12:40:21 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58966 00:05:13.018 killing process with pid 58966 00:05:13.018 12:40:21 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:13.018 12:40:21 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:13.018 12:40:21 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58966' 00:05:13.018 12:40:21 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58966 00:05:13.018 12:40:21 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58966 00:05:13.277 12:40:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:13.277 Process with pid 58961 is not found 00:05:13.277 Process with pid 58966 is not found 00:05:13.277 12:40:21 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:13.277 12:40:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58961 ]] 00:05:13.277 12:40:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58961 00:05:13.277 12:40:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58961 ']' 00:05:13.277 12:40:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58961 00:05:13.277 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58961) - No such process 00:05:13.277 12:40:21 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58961 is not found' 00:05:13.277 12:40:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58966 ]] 00:05:13.277 12:40:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58966 00:05:13.277 12:40:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58966 ']' 00:05:13.277 12:40:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58966 00:05:13.277 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58966) - No such process 00:05:13.277 12:40:21 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58966 is not found' 00:05:13.277 12:40:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:13.277 ************************************ 00:05:13.277 END TEST cpu_locks 00:05:13.277 ************************************ 00:05:13.277 00:05:13.277 real 0m15.377s 00:05:13.277 user 0m27.860s 00:05:13.277 sys 0m3.994s 00:05:13.277 12:40:21 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.277 12:40:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.277 ************************************ 00:05:13.277 END TEST event 00:05:13.277 ************************************ 00:05:13.277 00:05:13.277 real 0m41.700s 00:05:13.278 user 1m22.343s 00:05:13.278 sys 0m7.020s 00:05:13.278 12:40:21 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.278 12:40:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.537 12:40:21 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:13.537 12:40:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.537 12:40:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.537 12:40:21 -- common/autotest_common.sh@10 -- # set +x 00:05:13.537 ************************************ 00:05:13.537 START TEST thread 00:05:13.537 ************************************ 00:05:13.537 12:40:21 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:13.537 * Looking for test storage... 
00:05:13.537 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:13.537 12:40:22 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:13.537 12:40:22 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:13.537 12:40:22 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:13.537 12:40:22 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:13.537 12:40:22 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.537 12:40:22 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.537 12:40:22 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.537 12:40:22 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.537 12:40:22 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.538 12:40:22 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.538 12:40:22 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.538 12:40:22 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.538 12:40:22 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.538 12:40:22 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.538 12:40:22 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.538 12:40:22 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:13.538 12:40:22 thread -- scripts/common.sh@345 -- # : 1 00:05:13.538 12:40:22 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.538 12:40:22 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:13.538 12:40:22 thread -- scripts/common.sh@365 -- # decimal 1 00:05:13.538 12:40:22 thread -- scripts/common.sh@353 -- # local d=1 00:05:13.538 12:40:22 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.538 12:40:22 thread -- scripts/common.sh@355 -- # echo 1 00:05:13.538 12:40:22 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.538 12:40:22 thread -- scripts/common.sh@366 -- # decimal 2 00:05:13.538 12:40:22 thread -- scripts/common.sh@353 -- # local d=2 00:05:13.538 12:40:22 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.538 12:40:22 thread -- scripts/common.sh@355 -- # echo 2 00:05:13.538 12:40:22 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.538 12:40:22 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.538 12:40:22 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.538 12:40:22 thread -- scripts/common.sh@368 -- # return 0 00:05:13.538 12:40:22 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.538 12:40:22 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:13.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.538 --rc genhtml_branch_coverage=1 00:05:13.538 --rc genhtml_function_coverage=1 00:05:13.538 --rc genhtml_legend=1 00:05:13.538 --rc geninfo_all_blocks=1 00:05:13.538 --rc geninfo_unexecuted_blocks=1 00:05:13.538 00:05:13.538 ' 00:05:13.538 12:40:22 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:13.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.538 --rc genhtml_branch_coverage=1 00:05:13.538 --rc genhtml_function_coverage=1 00:05:13.538 --rc genhtml_legend=1 00:05:13.538 --rc geninfo_all_blocks=1 00:05:13.538 --rc geninfo_unexecuted_blocks=1 00:05:13.538 00:05:13.538 ' 00:05:13.538 12:40:22 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:13.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:13.538 --rc genhtml_branch_coverage=1 00:05:13.538 --rc genhtml_function_coverage=1 00:05:13.538 --rc genhtml_legend=1 00:05:13.538 --rc geninfo_all_blocks=1 00:05:13.538 --rc geninfo_unexecuted_blocks=1 00:05:13.538 00:05:13.538 ' 00:05:13.538 12:40:22 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:13.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.538 --rc genhtml_branch_coverage=1 00:05:13.538 --rc genhtml_function_coverage=1 00:05:13.538 --rc genhtml_legend=1 00:05:13.538 --rc geninfo_all_blocks=1 00:05:13.538 --rc geninfo_unexecuted_blocks=1 00:05:13.538 00:05:13.538 ' 00:05:13.538 12:40:22 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:13.538 12:40:22 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:13.538 12:40:22 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.538 12:40:22 thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.538 ************************************ 00:05:13.538 START TEST thread_poller_perf 00:05:13.538 ************************************ 00:05:13.538 12:40:22 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:13.538 [2024-11-15 12:40:22.155396] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:05:13.538 [2024-11-15 12:40:22.155667] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59097 ] 00:05:13.798 [2024-11-15 12:40:22.302963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.798 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:13.798 [2024-11-15 12:40:22.330474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.735 [2024-11-15T12:40:23.405Z] ====================================== 00:05:14.735 [2024-11-15T12:40:23.405Z] busy:2208782536 (cyc) 00:05:14.735 [2024-11-15T12:40:23.405Z] total_run_count: 390000 00:05:14.735 [2024-11-15T12:40:23.405Z] tsc_hz: 2200000000 (cyc) 00:05:14.735 [2024-11-15T12:40:23.405Z] ====================================== 00:05:14.735 [2024-11-15T12:40:23.405Z] poller_cost: 5663 (cyc), 2574 (nsec) 00:05:14.735 00:05:14.735 real 0m1.236s 00:05:14.735 user 0m1.096s 00:05:14.735 sys 0m0.035s 00:05:14.735 12:40:23 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.735 ************************************ 00:05:14.735 END TEST thread_poller_perf 00:05:14.735 ************************************ 00:05:14.735 12:40:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:14.994 12:40:23 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:14.994 12:40:23 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:14.994 12:40:23 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.994 12:40:23 thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.994 ************************************ 00:05:14.994 START TEST thread_poller_perf 00:05:14.994 ************************************ 00:05:14.994 12:40:23 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:14.994 [2024-11-15 12:40:23.443873] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:05:14.994 [2024-11-15 12:40:23.443988] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59131 ] 00:05:14.994 [2024-11-15 12:40:23.588184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.994 Running 1000 pollers for 1 seconds with 0 microseconds period. 
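The per-poller figures in the result block above follow directly from the reported counters:

    poller_cost (cyc)  = busy / total_run_count     = 2208782536 / 390000     ≈ 5663 cyc
    poller_cost (nsec) = poller_cost * 1e9 / tsc_hz = 5663 * 1e9 / 2200000000 ≈ 2574 nsec

The second run, reported below with a 0 microsecond poller period, applies the same arithmetic at a much higher iteration count: 2201816548 / 5043000 ≈ 436 cyc, i.e. about 198 nsec per poller.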
00:05:14.994 [2024-11-15 12:40:23.615535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.374 [2024-11-15T12:40:25.044Z] ====================================== 00:05:16.374 [2024-11-15T12:40:25.044Z] busy:2201816548 (cyc) 00:05:16.374 [2024-11-15T12:40:25.044Z] total_run_count: 5043000 00:05:16.374 [2024-11-15T12:40:25.044Z] tsc_hz: 2200000000 (cyc) 00:05:16.374 [2024-11-15T12:40:25.044Z] ====================================== 00:05:16.374 [2024-11-15T12:40:25.044Z] poller_cost: 436 (cyc), 198 (nsec) 00:05:16.374 00:05:16.374 real 0m1.227s 00:05:16.374 user 0m1.081s 00:05:16.374 sys 0m0.040s 00:05:16.374 12:40:24 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.374 ************************************ 00:05:16.374 END TEST thread_poller_perf 00:05:16.374 ************************************ 00:05:16.374 12:40:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:16.374 12:40:24 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:16.374 ************************************ 00:05:16.374 END TEST thread 00:05:16.374 ************************************ 00:05:16.374 00:05:16.374 real 0m2.737s 00:05:16.374 user 0m2.309s 00:05:16.374 sys 0m0.212s 00:05:16.374 12:40:24 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.374 12:40:24 thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.374 12:40:24 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:16.374 12:40:24 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:16.374 12:40:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.374 12:40:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.374 12:40:24 -- common/autotest_common.sh@10 -- # set +x 00:05:16.374 ************************************ 00:05:16.374 START TEST app_cmdline 00:05:16.374 ************************************ 00:05:16.374 12:40:24 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:16.374 * Looking for test storage... 
00:05:16.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:16.374 12:40:24 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:16.374 12:40:24 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:16.374 12:40:24 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:16.374 12:40:24 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:16.374 12:40:24 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.374 12:40:24 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.374 12:40:24 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.374 12:40:24 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.374 12:40:24 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.374 12:40:24 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.374 12:40:24 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.374 12:40:24 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.374 12:40:24 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.374 12:40:24 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.374 12:40:24 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.374 12:40:24 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:16.374 12:40:24 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:16.374 12:40:24 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.374 12:40:24 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:16.374 12:40:24 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:16.374 12:40:24 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:16.374 12:40:24 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.374 12:40:24 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:16.374 12:40:24 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.374 12:40:24 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:16.374 12:40:24 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:16.374 12:40:24 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.375 12:40:24 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:16.375 12:40:24 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.375 12:40:24 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.375 12:40:24 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.375 12:40:24 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:16.375 12:40:24 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.375 12:40:24 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:16.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.375 --rc genhtml_branch_coverage=1 00:05:16.375 --rc genhtml_function_coverage=1 00:05:16.375 --rc genhtml_legend=1 00:05:16.375 --rc geninfo_all_blocks=1 00:05:16.375 --rc geninfo_unexecuted_blocks=1 00:05:16.375 00:05:16.375 ' 00:05:16.375 12:40:24 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:16.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.375 --rc genhtml_branch_coverage=1 00:05:16.375 --rc genhtml_function_coverage=1 00:05:16.375 --rc genhtml_legend=1 00:05:16.375 --rc geninfo_all_blocks=1 00:05:16.375 --rc geninfo_unexecuted_blocks=1 00:05:16.375 
00:05:16.375 ' 00:05:16.375 12:40:24 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:16.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.375 --rc genhtml_branch_coverage=1 00:05:16.375 --rc genhtml_function_coverage=1 00:05:16.375 --rc genhtml_legend=1 00:05:16.375 --rc geninfo_all_blocks=1 00:05:16.375 --rc geninfo_unexecuted_blocks=1 00:05:16.375 00:05:16.375 ' 00:05:16.375 12:40:24 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:16.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.375 --rc genhtml_branch_coverage=1 00:05:16.375 --rc genhtml_function_coverage=1 00:05:16.375 --rc genhtml_legend=1 00:05:16.375 --rc geninfo_all_blocks=1 00:05:16.375 --rc geninfo_unexecuted_blocks=1 00:05:16.375 00:05:16.375 ' 00:05:16.375 12:40:24 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:16.375 12:40:24 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59209 00:05:16.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.375 12:40:24 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59209 00:05:16.375 12:40:24 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:16.375 12:40:24 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59209 ']' 00:05:16.375 12:40:24 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.375 12:40:24 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.375 12:40:24 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.375 12:40:24 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.375 12:40:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:16.375 [2024-11-15 12:40:24.997278] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:16.375 [2024-11-15 12:40:24.997582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59209 ] 00:05:16.634 [2024-11-15 12:40:25.142393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.634 [2024-11-15 12:40:25.170882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.634 [2024-11-15 12:40:25.207204] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:16.893 12:40:25 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.893 12:40:25 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:16.893 12:40:25 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:17.152 { 00:05:17.152 "version": "SPDK v25.01-pre git sha1 d2671b4b7", 00:05:17.152 "fields": { 00:05:17.152 "major": 25, 00:05:17.152 "minor": 1, 00:05:17.152 "patch": 0, 00:05:17.152 "suffix": "-pre", 00:05:17.152 "commit": "d2671b4b7" 00:05:17.152 } 00:05:17.152 } 00:05:17.152 12:40:25 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:17.152 12:40:25 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:17.152 12:40:25 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:17.152 12:40:25 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:17.152 12:40:25 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:17.152 12:40:25 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.152 12:40:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:17.152 12:40:25 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:17.152 12:40:25 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:17.152 12:40:25 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.152 12:40:25 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:17.152 12:40:25 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:17.152 12:40:25 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:17.152 12:40:25 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:17.152 12:40:25 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:17.152 12:40:25 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:17.152 12:40:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.152 12:40:25 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:17.152 12:40:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.152 12:40:25 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:17.152 12:40:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.152 12:40:25 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:17.152 12:40:25 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:17.152 12:40:25 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:17.412 request: 00:05:17.412 { 00:05:17.412 "method": "env_dpdk_get_mem_stats", 00:05:17.412 "req_id": 1 00:05:17.412 } 00:05:17.412 Got JSON-RPC error response 00:05:17.412 response: 00:05:17.412 { 00:05:17.412 "code": -32601, 00:05:17.412 "message": "Method not found" 00:05:17.412 } 00:05:17.412 12:40:25 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:17.412 12:40:25 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:17.412 12:40:25 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:17.412 12:40:25 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:17.412 12:40:25 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59209 00:05:17.412 12:40:25 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59209 ']' 00:05:17.412 12:40:25 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59209 00:05:17.412 12:40:25 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:17.412 12:40:25 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.412 12:40:25 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59209 00:05:17.412 killing process with pid 59209 00:05:17.412 12:40:25 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.412 12:40:25 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.412 12:40:25 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59209' 00:05:17.412 12:40:25 app_cmdline -- common/autotest_common.sh@973 -- # kill 59209 00:05:17.412 12:40:25 app_cmdline -- common/autotest_common.sh@978 -- # wait 59209 00:05:17.671 00:05:17.671 real 0m1.430s 00:05:17.671 user 0m1.870s 00:05:17.671 sys 0m0.321s 00:05:17.671 12:40:26 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.671 ************************************ 00:05:17.671 12:40:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:17.671 END TEST app_cmdline 00:05:17.671 ************************************ 00:05:17.671 12:40:26 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:17.671 12:40:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.671 12:40:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.671 12:40:26 -- common/autotest_common.sh@10 -- # set +x 00:05:17.671 ************************************ 00:05:17.671 START TEST version 00:05:17.671 ************************************ 00:05:17.671 12:40:26 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:17.671 * Looking for test storage... 
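The "Method not found" (-32601) error above is the expected outcome: cmdline.sh starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served; env_dpdk_get_mem_stats is outside the allowlist and is rejected as if it did not exist. A minimal reproduction with the same binaries and the default socket, assuming an environment already prepared for SPDK as in this run:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version          # allowed, returns the version JSON shown above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods           # allowed
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats    # rejected with -32601 "Method not found"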
00:05:17.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:17.671 12:40:26 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:17.671 12:40:26 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:17.671 12:40:26 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:17.932 12:40:26 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:17.932 12:40:26 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.932 12:40:26 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.932 12:40:26 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.932 12:40:26 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.932 12:40:26 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.932 12:40:26 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.932 12:40:26 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.932 12:40:26 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.932 12:40:26 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.932 12:40:26 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.932 12:40:26 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.932 12:40:26 version -- scripts/common.sh@344 -- # case "$op" in 00:05:17.932 12:40:26 version -- scripts/common.sh@345 -- # : 1 00:05:17.932 12:40:26 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.932 12:40:26 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.932 12:40:26 version -- scripts/common.sh@365 -- # decimal 1 00:05:17.932 12:40:26 version -- scripts/common.sh@353 -- # local d=1 00:05:17.932 12:40:26 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.932 12:40:26 version -- scripts/common.sh@355 -- # echo 1 00:05:17.932 12:40:26 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.932 12:40:26 version -- scripts/common.sh@366 -- # decimal 2 00:05:17.932 12:40:26 version -- scripts/common.sh@353 -- # local d=2 00:05:17.932 12:40:26 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.932 12:40:26 version -- scripts/common.sh@355 -- # echo 2 00:05:17.932 12:40:26 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.932 12:40:26 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.932 12:40:26 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.932 12:40:26 version -- scripts/common.sh@368 -- # return 0 00:05:17.932 12:40:26 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.932 12:40:26 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:17.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.932 --rc genhtml_branch_coverage=1 00:05:17.932 --rc genhtml_function_coverage=1 00:05:17.932 --rc genhtml_legend=1 00:05:17.932 --rc geninfo_all_blocks=1 00:05:17.932 --rc geninfo_unexecuted_blocks=1 00:05:17.932 00:05:17.932 ' 00:05:17.932 12:40:26 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:17.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.932 --rc genhtml_branch_coverage=1 00:05:17.932 --rc genhtml_function_coverage=1 00:05:17.932 --rc genhtml_legend=1 00:05:17.932 --rc geninfo_all_blocks=1 00:05:17.932 --rc geninfo_unexecuted_blocks=1 00:05:17.932 00:05:17.932 ' 00:05:17.932 12:40:26 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:17.932 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:17.932 --rc genhtml_branch_coverage=1 00:05:17.932 --rc genhtml_function_coverage=1 00:05:17.932 --rc genhtml_legend=1 00:05:17.932 --rc geninfo_all_blocks=1 00:05:17.932 --rc geninfo_unexecuted_blocks=1 00:05:17.932 00:05:17.932 ' 00:05:17.932 12:40:26 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:17.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.932 --rc genhtml_branch_coverage=1 00:05:17.932 --rc genhtml_function_coverage=1 00:05:17.932 --rc genhtml_legend=1 00:05:17.932 --rc geninfo_all_blocks=1 00:05:17.932 --rc geninfo_unexecuted_blocks=1 00:05:17.932 00:05:17.932 ' 00:05:17.932 12:40:26 version -- app/version.sh@17 -- # get_header_version major 00:05:17.932 12:40:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:17.932 12:40:26 version -- app/version.sh@14 -- # cut -f2 00:05:17.932 12:40:26 version -- app/version.sh@14 -- # tr -d '"' 00:05:17.932 12:40:26 version -- app/version.sh@17 -- # major=25 00:05:17.932 12:40:26 version -- app/version.sh@18 -- # get_header_version minor 00:05:17.932 12:40:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:17.932 12:40:26 version -- app/version.sh@14 -- # cut -f2 00:05:17.932 12:40:26 version -- app/version.sh@14 -- # tr -d '"' 00:05:17.932 12:40:26 version -- app/version.sh@18 -- # minor=1 00:05:17.932 12:40:26 version -- app/version.sh@19 -- # get_header_version patch 00:05:17.932 12:40:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:17.932 12:40:26 version -- app/version.sh@14 -- # cut -f2 00:05:17.932 12:40:26 version -- app/version.sh@14 -- # tr -d '"' 00:05:17.932 12:40:26 version -- app/version.sh@19 -- # patch=0 00:05:17.932 12:40:26 version -- app/version.sh@20 -- # get_header_version suffix 00:05:17.932 12:40:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:17.932 12:40:26 version -- app/version.sh@14 -- # cut -f2 00:05:17.932 12:40:26 version -- app/version.sh@14 -- # tr -d '"' 00:05:17.932 12:40:26 version -- app/version.sh@20 -- # suffix=-pre 00:05:17.932 12:40:26 version -- app/version.sh@22 -- # version=25.1 00:05:17.933 12:40:26 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:17.933 12:40:26 version -- app/version.sh@28 -- # version=25.1rc0 00:05:17.933 12:40:26 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:17.933 12:40:26 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:17.933 12:40:26 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:17.933 12:40:26 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:17.933 00:05:17.933 real 0m0.240s 00:05:17.933 user 0m0.163s 00:05:17.933 sys 0m0.110s 00:05:17.933 12:40:26 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.933 ************************************ 00:05:17.933 END TEST version 00:05:17.933 ************************************ 00:05:17.933 12:40:26 version -- common/autotest_common.sh@10 -- # set +x 00:05:17.933 12:40:26 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:17.933 12:40:26 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:17.933 12:40:26 -- spdk/autotest.sh@194 -- # uname -s 00:05:17.933 12:40:26 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:17.933 12:40:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:17.933 12:40:26 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:05:17.933 12:40:26 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:05:17.933 12:40:26 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:17.933 12:40:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.933 12:40:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.933 12:40:26 -- common/autotest_common.sh@10 -- # set +x 00:05:17.933 ************************************ 00:05:17.933 START TEST spdk_dd 00:05:17.933 ************************************ 00:05:17.933 12:40:26 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:17.933 * Looking for test storage... 00:05:18.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:18.193 12:40:26 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.193 12:40:26 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.193 12:40:26 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.193 12:40:26 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@345 -- # : 1 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@368 -- # return 0 00:05:18.193 12:40:26 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.193 12:40:26 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.193 --rc genhtml_branch_coverage=1 00:05:18.193 --rc genhtml_function_coverage=1 00:05:18.193 --rc genhtml_legend=1 00:05:18.193 --rc geninfo_all_blocks=1 00:05:18.193 --rc geninfo_unexecuted_blocks=1 00:05:18.193 00:05:18.193 ' 00:05:18.193 12:40:26 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.193 --rc genhtml_branch_coverage=1 00:05:18.193 --rc genhtml_function_coverage=1 00:05:18.193 --rc genhtml_legend=1 00:05:18.193 --rc geninfo_all_blocks=1 00:05:18.193 --rc geninfo_unexecuted_blocks=1 00:05:18.193 00:05:18.193 ' 00:05:18.193 12:40:26 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.193 --rc genhtml_branch_coverage=1 00:05:18.193 --rc genhtml_function_coverage=1 00:05:18.193 --rc genhtml_legend=1 00:05:18.193 --rc geninfo_all_blocks=1 00:05:18.193 --rc geninfo_unexecuted_blocks=1 00:05:18.193 00:05:18.193 ' 00:05:18.193 12:40:26 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.193 --rc genhtml_branch_coverage=1 00:05:18.193 --rc genhtml_function_coverage=1 00:05:18.193 --rc genhtml_legend=1 00:05:18.193 --rc geninfo_all_blocks=1 00:05:18.193 --rc geninfo_unexecuted_blocks=1 00:05:18.193 00:05:18.193 ' 00:05:18.193 12:40:26 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:18.193 12:40:26 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:18.193 12:40:26 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.193 12:40:26 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.193 12:40:26 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.193 12:40:26 spdk_dd -- paths/export.sh@5 -- # export PATH 00:05:18.193 12:40:26 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.193 12:40:26 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:18.453 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:18.453 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:18.453 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:18.453 12:40:27 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:05:18.453 12:40:27 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:05:18.453 12:40:27 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:05:18.453 12:40:27 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:05:18.453 12:40:27 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:05:18.453 12:40:27 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:05:18.453 12:40:27 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:05:18.453 12:40:27 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:05:18.453 12:40:27 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:05:18.453 12:40:27 spdk_dd -- scripts/common.sh@233 -- # local class 00:05:18.453 12:40:27 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:05:18.453 12:40:27 spdk_dd -- scripts/common.sh@235 -- # local progif 00:05:18.453 12:40:27 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:05:18.453 12:40:27 spdk_dd -- scripts/common.sh@236 -- # class=01 00:05:18.453 12:40:27 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:05:18.453 12:40:27 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:05:18.453 12:40:27 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:05:18.453 12:40:27 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:05:18.453 12:40:27 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:05:18.453 12:40:27 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:05:18.453 12:40:27 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:05:18.453 12:40:27 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:05:18.453 12:40:27 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:05:18.453 12:40:27 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:05:18.714 12:40:27 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:18.714 12:40:27 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:05:18.714 12:40:27 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:18.714 12:40:27 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:05:18.714 12:40:27 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:18.714 12:40:27 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:18.714 12:40:27 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:05:18.714 12:40:27 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:18.714 12:40:27 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:05:18.714 12:40:27 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:18.714 12:40:27 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:05:18.714 12:40:27 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:18.714 12:40:27 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:18.714 12:40:27 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:05:18.714 12:40:27 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:18.714 12:40:27 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:05:18.714 12:40:27 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:18.714 12:40:27 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:18.714 12:40:27 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:18.714 12:40:27 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:18.714 12:40:27 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:05:18.714 12:40:27 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:18.714 12:40:27 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:18.714 12:40:27 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:18.714 12:40:27 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:05:18.714 12:40:27 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:18.714 12:40:27 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@139 -- # local lib 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
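The entries around this point trace dd/common.sh's check_liburing: objdump -p lists the dynamic NEEDED entries of the spdk_dd binary, each line is split with read -r, and the library field is matched against liburing.so.* to decide whether the binary is linked to liburing. A minimal, standalone sketch of that detection, assuming a relative path to the spdk_dd binary rather than the build tree used in this run:

  liburing_in_use=0
  while read -r _ lib _; do
      # objdump -p NEEDED lines look like "  NEEDED  liburing.so.2"; $lib is the library name
      [[ $lib == liburing.so.* ]] && liburing_in_use=1
  done < <(objdump -p ./build/bin/spdk_dd | grep NEEDED)
  printf 'liburing_in_use=%s\n' "$liburing_in_use"

In this run the loop eventually reaches liburing.so.2, so the harness prints '* spdk_dd linked to liburing' further down and the uring-specific dd paths are exercised.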
00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.714 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:05:18.715 * spdk_dd linked to liburing 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:18.715 12:40:27 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:18.715 12:40:27 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:18.715 12:40:27 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:18.715 12:40:27 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:18.715 12:40:27 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:18.715 12:40:27 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:05:18.715 12:40:27 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:18.715 12:40:27 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:18.715 12:40:27 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:18.715 12:40:27 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:18.715 12:40:27 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:18.715 12:40:27 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:18.715 12:40:27 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:18.715 12:40:27 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:18.715 12:40:27 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:18.715 12:40:27 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:18.715 12:40:27 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:18.715 12:40:27 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:05:18.715 12:40:27 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:05:18.715 12:40:27 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:18.715 12:40:27 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:18.715 12:40:27 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:05:18.716 12:40:27 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:05:18.716 12:40:27 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:05:18.716 12:40:27 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:05:18.716 12:40:27 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:05:18.716 12:40:27 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:05:18.716 12:40:27 spdk_dd -- dd/common.sh@153 -- # return 0 00:05:18.716 12:40:27 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:05:18.716 12:40:27 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:18.716 12:40:27 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:18.716 12:40:27 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.716 12:40:27 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:18.716 ************************************ 00:05:18.716 START TEST spdk_dd_basic_rw 00:05:18.716 ************************************ 00:05:18.716 12:40:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:18.716 * Looking for test storage... 00:05:18.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:18.716 12:40:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.716 12:40:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.716 12:40:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.977 --rc genhtml_branch_coverage=1 00:05:18.977 --rc genhtml_function_coverage=1 00:05:18.977 --rc genhtml_legend=1 00:05:18.977 --rc geninfo_all_blocks=1 00:05:18.977 --rc geninfo_unexecuted_blocks=1 00:05:18.977 00:05:18.977 ' 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.977 --rc genhtml_branch_coverage=1 00:05:18.977 --rc genhtml_function_coverage=1 00:05:18.977 --rc genhtml_legend=1 00:05:18.977 --rc geninfo_all_blocks=1 00:05:18.977 --rc geninfo_unexecuted_blocks=1 00:05:18.977 00:05:18.977 ' 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.977 --rc genhtml_branch_coverage=1 00:05:18.977 --rc genhtml_function_coverage=1 00:05:18.977 --rc genhtml_legend=1 00:05:18.977 --rc geninfo_all_blocks=1 00:05:18.977 --rc geninfo_unexecuted_blocks=1 00:05:18.977 00:05:18.977 ' 00:05:18.977 12:40:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.977 --rc genhtml_branch_coverage=1 00:05:18.977 --rc genhtml_function_coverage=1 00:05:18.978 --rc genhtml_legend=1 00:05:18.978 --rc geninfo_all_blocks=1 00:05:18.978 --rc geninfo_unexecuted_blocks=1 00:05:18.978 00:05:18.978 ' 00:05:18.978 12:40:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:18.978 12:40:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:05:18.978 12:40:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:18.978 12:40:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:18.978 12:40:27 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:18.978 12:40:27 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.978 12:40:27 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.978 12:40:27 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.978 12:40:27 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:05:18.978 12:40:27 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.978 12:40:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:05:18.978 12:40:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:05:18.978 12:40:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:05:18.978 12:40:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:05:18.978 12:40:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:05:18.978 12:40:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:05:18.978 12:40:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:05:18.978 12:40:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:18.978 12:40:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
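The next entries trace dd/common.sh's get_native_nvme_bs for 0000:00:10.0: spdk_nvme_identify dumps the controller data, one regex extracts which LBA format is current (#04 here), and a second regex reads that format's data size, which becomes the 4096-byte native block size used by basic_rw.sh. A hedged sketch of the same two-step parse, assuming the identify output is captured into a single string and using illustrative variable names and a relative binary path:

  id=$(./build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0')
  # Step 1: which LBA format is selected, e.g. "Current LBA Format: LBA Format #04"
  pat='Current LBA Format: *LBA Format #([0-9]+)'
  [[ $id =~ $pat ]] && lbaf=${BASH_REMATCH[1]}
  # Step 2: the data size of that format, e.g. "LBA Format #04: Data Size: 4096"
  pat="LBA Format #${lbaf}: Data Size: *([0-9]+)"
  [[ $id =~ $pat ]] && native_bs=${BASH_REMATCH[1]}
  echo "$native_bs"    # 4096 for this controller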
00:05:18.978 12:40:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:05:18.978 12:40:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:05:18.978 12:40:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:05:18.978 12:40:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:05:18.979 12:40:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:05:18.979 12:40:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:05:18.979 12:40:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:05:18.979 12:40:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:05:18.979 12:40:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:05:18.979 12:40:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:05:18.979 12:40:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:05:18.979 12:40:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:18.979 12:40:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:05:18.980 12:40:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:18.980 12:40:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:18.980 12:40:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:18.980 12:40:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.980 12:40:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:18.980 ************************************ 00:05:18.980 START TEST dd_bs_lt_native_bs 00:05:18.980 ************************************ 00:05:18.980 12:40:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:18.980 12:40:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:05:18.980 12:40:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:18.980 12:40:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:18.980 12:40:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.980 12:40:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:18.980 12:40:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.980 12:40:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:18.980 12:40:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.980 12:40:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:18.980 12:40:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:18.980 12:40:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:19.239 { 00:05:19.239 "subsystems": [ 00:05:19.239 { 00:05:19.239 "subsystem": "bdev", 00:05:19.239 "config": [ 00:05:19.239 { 00:05:19.239 "params": { 00:05:19.239 "trtype": "pcie", 00:05:19.239 "traddr": "0000:00:10.0", 00:05:19.239 "name": "Nvme0" 00:05:19.239 }, 00:05:19.239 "method": "bdev_nvme_attach_controller" 00:05:19.239 }, 00:05:19.239 { 00:05:19.239 "method": "bdev_wait_for_examine" 00:05:19.239 } 00:05:19.239 ] 00:05:19.239 } 00:05:19.239 ] 00:05:19.239 } 00:05:19.239 [2024-11-15 12:40:27.667354] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:05:19.239 [2024-11-15 12:40:27.667454] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59552 ] 00:05:19.239 [2024-11-15 12:40:27.819815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.239 [2024-11-15 12:40:27.858897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.239 [2024-11-15 12:40:27.893121] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:19.498 [2024-11-15 12:40:27.988331] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:05:19.498 [2024-11-15 12:40:27.988415] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:19.498 [2024-11-15 12:40:28.059238] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:19.498 12:40:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:05:19.498 12:40:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:19.498 12:40:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:05:19.498 12:40:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:05:19.498 12:40:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:05:19.498 12:40:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:19.498 00:05:19.498 real 0m0.505s 00:05:19.498 user 0m0.349s 00:05:19.498 sys 0m0.111s 00:05:19.498 12:40:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.498 12:40:28 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:05:19.498 ************************************ 00:05:19.498 END TEST dd_bs_lt_native_bs 00:05:19.498 ************************************ 00:05:19.498 12:40:28 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:05:19.498 12:40:28 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:19.498 12:40:28 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.498 12:40:28 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:19.498 ************************************ 00:05:19.498 START TEST dd_rw 00:05:19.498 ************************************ 00:05:19.498 12:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:05:19.498 12:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:05:19.498 12:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:05:19.498 12:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:05:19.498 12:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:05:19.498 12:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:19.498 12:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:19.498 12:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:19.498 12:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:19.498 12:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:19.498 12:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:19.498 12:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:19.498 12:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:19.498 12:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:19.498 12:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:19.498 12:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:19.498 12:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:19.498 12:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:19.498 12:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:20.066 12:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:05:20.066 12:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:20.066 12:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:20.066 12:40:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:20.066 [2024-11-15 12:40:28.691050] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
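The two steps traced above fit together: dd/common.sh derives the drive's native block size by matching the identify dump against the current LBA format line ("LBA Format #04: Data Size: 4096 ..."), which sets lbaf=4096 and native_bs=4096, and dd_bs_lt_native_bs then calls spdk_dd with --bs=2048 on purpose, expecting the "--bs value cannot be less than ... native block size" error. The NOT wrapper inverts the exit status, and the es=234 -> 106 -> 1 sequence is its normalization of exit codes above 128. A minimal sketch of that pattern with simplified helper bodies (identify.txt and conf.json are stand-ins for the captured dump and for the config the harness pipes over /dev/fd/61; the real NOT and gen_conf in autotest_common.sh and dd/common.sh are more involved):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

# Pull the native block size out of the identify dump, keyed on the current LBA format.
re='LBA Format #04: Data Size: *([0-9]+)'
[[ $(cat identify.txt) =~ $re ]] && native_bs="${BASH_REMATCH[1]}"   # 4096 on this controller

# A --bs smaller than the native block size has to make spdk_dd fail; NOT() inverts the status.
NOT() { if "$@"; then return 1; else return 0; fi; }
NOT "$SPDK_DD" --if=/dev/stdin --ob=Nvme0n1 --bs=2048 --json conf.json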
00:05:20.066 [2024-11-15 12:40:28.691150] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59580 ] 00:05:20.066 { 00:05:20.066 "subsystems": [ 00:05:20.066 { 00:05:20.066 "subsystem": "bdev", 00:05:20.066 "config": [ 00:05:20.066 { 00:05:20.066 "params": { 00:05:20.066 "trtype": "pcie", 00:05:20.066 "traddr": "0000:00:10.0", 00:05:20.066 "name": "Nvme0" 00:05:20.066 }, 00:05:20.066 "method": "bdev_nvme_attach_controller" 00:05:20.066 }, 00:05:20.066 { 00:05:20.066 "method": "bdev_wait_for_examine" 00:05:20.066 } 00:05:20.066 ] 00:05:20.066 } 00:05:20.066 ] 00:05:20.066 } 00:05:20.325 [2024-11-15 12:40:28.837328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.325 [2024-11-15 12:40:28.865480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.325 [2024-11-15 12:40:28.893032] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:20.325  [2024-11-15T12:40:29.255Z] Copying: 60/60 [kB] (average 19 MBps) 00:05:20.585 00:05:20.585 12:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:05:20.585 12:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:20.585 12:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:20.585 12:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:20.585 { 00:05:20.585 "subsystems": [ 00:05:20.585 { 00:05:20.585 "subsystem": "bdev", 00:05:20.585 "config": [ 00:05:20.585 { 00:05:20.585 "params": { 00:05:20.585 "trtype": "pcie", 00:05:20.585 "traddr": "0000:00:10.0", 00:05:20.585 "name": "Nvme0" 00:05:20.585 }, 00:05:20.585 "method": "bdev_nvme_attach_controller" 00:05:20.585 }, 00:05:20.585 { 00:05:20.585 "method": "bdev_wait_for_examine" 00:05:20.585 } 00:05:20.585 ] 00:05:20.585 } 00:05:20.585 ] 00:05:20.585 } 00:05:20.585 [2024-11-15 12:40:29.160128] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
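Every spdk_dd call in this test is a short-lived, single-core SPDK application (hence the repeated "Total cores available: 1", "Reactor started on core 0" and the uring socket override), and the { "subsystems": ... } document printed before each run is the bdev config that gen_conf assembles and the harness pipes in through --json /dev/fd/62. It attaches the emulated controller at PCIe address 0000:00:10.0 under the name "Nvme0", whose first namespace then shows up as bdev Nvme0n1, and bdev_wait_for_examine holds I/O until bdev examination has finished. The same read-back as above can be reproduced standalone with the config in a plain file (nvme0.json is an assumed file name; the JSON body is copied from the trace):

cat > nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --ib=Nvme0n1 --of=dd.dump1 --bs=4096 --qd=1 --count=15 --json nvme0.json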
00:05:20.585 [2024-11-15 12:40:29.160226] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59598 ] 00:05:20.845 [2024-11-15 12:40:29.305604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.845 [2024-11-15 12:40:29.333018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.845 [2024-11-15 12:40:29.360642] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:20.845  [2024-11-15T12:40:29.775Z] Copying: 60/60 [kB] (average 19 MBps) 00:05:21.105 00:05:21.105 12:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:21.105 12:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:21.105 12:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:21.105 12:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:21.105 12:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:21.105 12:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:21.105 12:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:21.106 12:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:21.106 12:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:21.106 12:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:21.106 12:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:21.106 [2024-11-15 12:40:29.642789] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
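This completes one cell of the dd_rw matrix and shows the verification loop the rest of the test keeps repeating: the 61440-byte pattern file dd.dump0 produced by gen_bytes is written to the bdev, read back into dd.dump1, the two dumps are compared with diff -q, and clear_nvme then overwrites the touched region with a single 1 MiB block of zeroes from /dev/zero before the next block-size/queue-depth combination runs. Condensed to one iteration (paths as in the log, conf.json standing in for the piped config):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
D=/home/vagrant/spdk_repo/spdk/test/dd

"$SPDK_DD" --if="$D/dd.dump0" --ob=Nvme0n1 --bs=4096 --qd=1            --json conf.json   # write the pattern
"$SPDK_DD" --ib=Nvme0n1 --of="$D/dd.dump1" --bs=4096 --qd=1 --count=15 --json conf.json   # read it back
diff -q "$D/dd.dump0" "$D/dd.dump1"                                                       # must be identical
"$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1          --json conf.json   # clear_nvme: zero the region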
00:05:21.106 [2024-11-15 12:40:29.642878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59608 ] 00:05:21.106 { 00:05:21.106 "subsystems": [ 00:05:21.106 { 00:05:21.106 "subsystem": "bdev", 00:05:21.106 "config": [ 00:05:21.106 { 00:05:21.106 "params": { 00:05:21.106 "trtype": "pcie", 00:05:21.106 "traddr": "0000:00:10.0", 00:05:21.106 "name": "Nvme0" 00:05:21.106 }, 00:05:21.106 "method": "bdev_nvme_attach_controller" 00:05:21.106 }, 00:05:21.106 { 00:05:21.106 "method": "bdev_wait_for_examine" 00:05:21.106 } 00:05:21.106 ] 00:05:21.106 } 00:05:21.106 ] 00:05:21.106 } 00:05:21.365 [2024-11-15 12:40:29.790330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.365 [2024-11-15 12:40:29.817621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.365 [2024-11-15 12:40:29.845287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:21.365  [2024-11-15T12:40:30.294Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:21.624 00:05:21.624 12:40:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:21.624 12:40:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:21.624 12:40:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:21.624 12:40:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:21.624 12:40:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:21.624 12:40:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:21.624 12:40:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:22.192 12:40:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:05:22.192 12:40:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:22.192 12:40:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:22.192 12:40:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:22.193 [2024-11-15 12:40:30.644039] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:22.193 [2024-11-15 12:40:30.644122] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59627 ] 00:05:22.193 { 00:05:22.193 "subsystems": [ 00:05:22.193 { 00:05:22.193 "subsystem": "bdev", 00:05:22.193 "config": [ 00:05:22.193 { 00:05:22.193 "params": { 00:05:22.193 "trtype": "pcie", 00:05:22.193 "traddr": "0000:00:10.0", 00:05:22.193 "name": "Nvme0" 00:05:22.193 }, 00:05:22.193 "method": "bdev_nvme_attach_controller" 00:05:22.193 }, 00:05:22.193 { 00:05:22.193 "method": "bdev_wait_for_examine" 00:05:22.193 } 00:05:22.193 ] 00:05:22.193 } 00:05:22.193 ] 00:05:22.193 } 00:05:22.193 [2024-11-15 12:40:30.783709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.193 [2024-11-15 12:40:30.811106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.193 [2024-11-15 12:40:30.838799] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:22.452  [2024-11-15T12:40:31.122Z] Copying: 60/60 [kB] (average 58 MBps) 00:05:22.452 00:05:22.452 12:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:05:22.452 12:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:22.452 12:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:22.452 12:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:22.452 [2024-11-15 12:40:31.100170] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
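The same transfer of 15 blocks of 4096 bytes is re-driven here with --qd=64, so spdk_dd may keep up to 64 requests in flight instead of one; in this run the reported copy average rises from 19 MBps at qd=1 to 58 MBps at qd=64. Back-of-the-envelope time per pass from those averages (the progress lines only report an average, so this is indicative rather than a benchmark):

# 61440 bytes per pass, using the averages printed by the Copying: lines:
echo "scale=2; 61440 * 1000 / (19 * 1000000)" | bc    # ~3.23 ms at qd=1
echo "scale=2; 61440 * 1000 / (58 * 1000000)" | bc    # ~1.05 ms at qd=64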
00:05:22.452 [2024-11-15 12:40:31.100273] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59640 ] 00:05:22.452 { 00:05:22.452 "subsystems": [ 00:05:22.452 { 00:05:22.452 "subsystem": "bdev", 00:05:22.452 "config": [ 00:05:22.452 { 00:05:22.452 "params": { 00:05:22.452 "trtype": "pcie", 00:05:22.452 "traddr": "0000:00:10.0", 00:05:22.452 "name": "Nvme0" 00:05:22.452 }, 00:05:22.452 "method": "bdev_nvme_attach_controller" 00:05:22.452 }, 00:05:22.452 { 00:05:22.452 "method": "bdev_wait_for_examine" 00:05:22.452 } 00:05:22.452 ] 00:05:22.452 } 00:05:22.452 ] 00:05:22.452 } 00:05:22.711 [2024-11-15 12:40:31.244848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.712 [2024-11-15 12:40:31.273801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.712 [2024-11-15 12:40:31.301160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:22.971  [2024-11-15T12:40:31.641Z] Copying: 60/60 [kB] (average 58 MBps) 00:05:22.971 00:05:22.971 12:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:22.971 12:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:22.971 12:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:22.971 12:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:22.971 12:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:22.971 12:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:22.971 12:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:22.971 12:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:22.971 12:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:22.971 12:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:22.971 12:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:22.971 { 00:05:22.971 "subsystems": [ 00:05:22.971 { 00:05:22.971 "subsystem": "bdev", 00:05:22.971 "config": [ 00:05:22.971 { 00:05:22.971 "params": { 00:05:22.971 "trtype": "pcie", 00:05:22.971 "traddr": "0000:00:10.0", 00:05:22.971 "name": "Nvme0" 00:05:22.971 }, 00:05:22.971 "method": "bdev_nvme_attach_controller" 00:05:22.971 }, 00:05:22.971 { 00:05:22.971 "method": "bdev_wait_for_examine" 00:05:22.971 } 00:05:22.971 ] 00:05:22.971 } 00:05:22.971 ] 00:05:22.971 } 00:05:22.971 [2024-11-15 12:40:31.571620] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:22.971 [2024-11-15 12:40:31.571720] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59656 ] 00:05:23.231 [2024-11-15 12:40:31.715722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.231 [2024-11-15 12:40:31.743314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.231 [2024-11-15 12:40:31.771496] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:23.231  [2024-11-15T12:40:32.159Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:23.489 00:05:23.489 12:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:23.489 12:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:23.489 12:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:23.489 12:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:23.489 12:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:23.489 12:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:23.489 12:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:23.489 12:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:24.061 12:40:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:05:24.061 12:40:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:24.061 12:40:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:24.061 12:40:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:24.061 [2024-11-15 12:40:32.528862] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
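The matrix now moves to its second block size. dd_rw derives its block sizes by shifting the native 4096-byte block left by 0, 1 and 2 (the bss+=($((native_bs << bs))) lines earlier in the trace), pairs each with queue depths 1 and 64, and sizes every transfer as count * bs, which is why the progress lines shrink from 60 kB to 56 kB and finally 48 kB as the block size grows. A quick check of the numbers visible in the trace:

# Illustrative arithmetic only; all values appear in the trace above and below.
native_bs=4096
echo $((native_bs << 0)) $((native_bs << 1)) $((native_bs << 2))   # 4096 8192 16384
echo $((15 * 4096)) $((7 * 8192)) $((3 * 16384))                   # 61440 57344 49152 (gen_bytes sizes)
echo $((61440 / 1024)) $((57344 / 1024)) $((49152 / 1024))         # 60 56 48 (kB shown by Copying:)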
00:05:24.061 [2024-11-15 12:40:32.528965] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59675 ] 00:05:24.061 { 00:05:24.061 "subsystems": [ 00:05:24.061 { 00:05:24.061 "subsystem": "bdev", 00:05:24.061 "config": [ 00:05:24.061 { 00:05:24.061 "params": { 00:05:24.061 "trtype": "pcie", 00:05:24.061 "traddr": "0000:00:10.0", 00:05:24.061 "name": "Nvme0" 00:05:24.061 }, 00:05:24.061 "method": "bdev_nvme_attach_controller" 00:05:24.061 }, 00:05:24.061 { 00:05:24.061 "method": "bdev_wait_for_examine" 00:05:24.061 } 00:05:24.061 ] 00:05:24.061 } 00:05:24.061 ] 00:05:24.061 } 00:05:24.061 [2024-11-15 12:40:32.664705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.061 [2024-11-15 12:40:32.694777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.061 [2024-11-15 12:40:32.724562] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:24.321  [2024-11-15T12:40:32.991Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:24.321 00:05:24.321 12:40:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:05:24.321 12:40:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:24.321 12:40:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:24.321 12:40:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:24.321 [2024-11-15 12:40:32.987024] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:24.321 [2024-11-15 12:40:32.987136] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59688 ] 00:05:24.592 { 00:05:24.592 "subsystems": [ 00:05:24.592 { 00:05:24.592 "subsystem": "bdev", 00:05:24.592 "config": [ 00:05:24.592 { 00:05:24.592 "params": { 00:05:24.593 "trtype": "pcie", 00:05:24.593 "traddr": "0000:00:10.0", 00:05:24.593 "name": "Nvme0" 00:05:24.593 }, 00:05:24.593 "method": "bdev_nvme_attach_controller" 00:05:24.593 }, 00:05:24.593 { 00:05:24.593 "method": "bdev_wait_for_examine" 00:05:24.593 } 00:05:24.593 ] 00:05:24.593 } 00:05:24.593 ] 00:05:24.593 } 00:05:24.593 [2024-11-15 12:40:33.129872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.593 [2024-11-15 12:40:33.159276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.593 [2024-11-15 12:40:33.189918] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:24.857  [2024-11-15T12:40:33.527Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:24.857 00:05:24.857 12:40:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:24.857 12:40:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:24.857 12:40:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:24.857 12:40:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:24.857 12:40:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:24.857 12:40:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:24.857 12:40:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:24.857 12:40:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:24.857 12:40:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:24.857 12:40:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:24.857 12:40:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:24.857 [2024-11-15 12:40:33.450468] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:24.857 [2024-11-15 12:40:33.450571] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59704 ] 00:05:24.857 { 00:05:24.857 "subsystems": [ 00:05:24.857 { 00:05:24.857 "subsystem": "bdev", 00:05:24.857 "config": [ 00:05:24.857 { 00:05:24.857 "params": { 00:05:24.857 "trtype": "pcie", 00:05:24.857 "traddr": "0000:00:10.0", 00:05:24.857 "name": "Nvme0" 00:05:24.857 }, 00:05:24.857 "method": "bdev_nvme_attach_controller" 00:05:24.857 }, 00:05:24.857 { 00:05:24.857 "method": "bdev_wait_for_examine" 00:05:24.857 } 00:05:24.857 ] 00:05:24.857 } 00:05:24.857 ] 00:05:24.857 } 00:05:25.117 [2024-11-15 12:40:33.590209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.117 [2024-11-15 12:40:33.617618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.117 [2024-11-15 12:40:33.645486] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:25.117  [2024-11-15T12:40:34.045Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:25.375 00:05:25.375 12:40:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:25.375 12:40:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:25.375 12:40:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:25.375 12:40:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:25.375 12:40:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:25.375 12:40:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:25.375 12:40:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:25.943 12:40:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:05:25.943 12:40:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:25.943 12:40:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:25.943 12:40:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:25.943 { 00:05:25.943 "subsystems": [ 00:05:25.943 { 00:05:25.943 "subsystem": "bdev", 00:05:25.943 "config": [ 00:05:25.943 { 00:05:25.943 "params": { 00:05:25.943 "trtype": "pcie", 00:05:25.943 "traddr": "0000:00:10.0", 00:05:25.943 "name": "Nvme0" 00:05:25.943 }, 00:05:25.943 "method": "bdev_nvme_attach_controller" 00:05:25.943 }, 00:05:25.943 { 00:05:25.943 "method": "bdev_wait_for_examine" 00:05:25.943 } 00:05:25.943 ] 00:05:25.943 } 00:05:25.943 ] 00:05:25.943 } 00:05:25.943 [2024-11-15 12:40:34.416136] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:25.943 [2024-11-15 12:40:34.416227] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59723 ] 00:05:25.943 [2024-11-15 12:40:34.560767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.943 [2024-11-15 12:40:34.587972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.201 [2024-11-15 12:40:34.615626] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:26.201  [2024-11-15T12:40:34.871Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:26.201 00:05:26.201 12:40:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:05:26.201 12:40:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:26.201 12:40:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:26.201 12:40:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:26.460 [2024-11-15 12:40:34.873871] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:05:26.460 [2024-11-15 12:40:34.873970] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59731 ] 00:05:26.460 { 00:05:26.460 "subsystems": [ 00:05:26.460 { 00:05:26.460 "subsystem": "bdev", 00:05:26.460 "config": [ 00:05:26.460 { 00:05:26.460 "params": { 00:05:26.460 "trtype": "pcie", 00:05:26.460 "traddr": "0000:00:10.0", 00:05:26.460 "name": "Nvme0" 00:05:26.460 }, 00:05:26.460 "method": "bdev_nvme_attach_controller" 00:05:26.460 }, 00:05:26.460 { 00:05:26.460 "method": "bdev_wait_for_examine" 00:05:26.460 } 00:05:26.460 ] 00:05:26.460 } 00:05:26.460 ] 00:05:26.460 } 00:05:26.460 [2024-11-15 12:40:35.012625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.460 [2024-11-15 12:40:35.039497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.460 [2024-11-15 12:40:35.066398] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:26.719  [2024-11-15T12:40:35.389Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:26.719 00:05:26.719 12:40:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:26.719 12:40:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:26.719 12:40:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:26.719 12:40:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:26.719 12:40:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:26.719 12:40:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:26.719 12:40:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:26.719 12:40:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
00:05:26.719 12:40:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:26.719 12:40:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:26.719 12:40:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:26.719 [2024-11-15 12:40:35.335612] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:05:26.719 [2024-11-15 12:40:35.335720] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59752 ] 00:05:26.719 { 00:05:26.719 "subsystems": [ 00:05:26.719 { 00:05:26.719 "subsystem": "bdev", 00:05:26.719 "config": [ 00:05:26.719 { 00:05:26.719 "params": { 00:05:26.719 "trtype": "pcie", 00:05:26.719 "traddr": "0000:00:10.0", 00:05:26.719 "name": "Nvme0" 00:05:26.719 }, 00:05:26.719 "method": "bdev_nvme_attach_controller" 00:05:26.719 }, 00:05:26.719 { 00:05:26.719 "method": "bdev_wait_for_examine" 00:05:26.719 } 00:05:26.719 ] 00:05:26.719 } 00:05:26.719 ] 00:05:26.719 } 00:05:26.978 [2024-11-15 12:40:35.470829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.978 [2024-11-15 12:40:35.497657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.978 [2024-11-15 12:40:35.524700] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:26.978  [2024-11-15T12:40:35.907Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:27.237 00:05:27.237 12:40:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:27.237 12:40:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:27.237 12:40:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:27.237 12:40:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:27.237 12:40:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:27.237 12:40:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:27.237 12:40:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:27.237 12:40:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:27.804 12:40:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:05:27.804 12:40:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:27.805 12:40:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:27.805 12:40:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:27.805 [2024-11-15 12:40:36.225758] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:27.805 [2024-11-15 12:40:36.225857] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59765 ] 00:05:27.805 { 00:05:27.805 "subsystems": [ 00:05:27.805 { 00:05:27.805 "subsystem": "bdev", 00:05:27.805 "config": [ 00:05:27.805 { 00:05:27.805 "params": { 00:05:27.805 "trtype": "pcie", 00:05:27.805 "traddr": "0000:00:10.0", 00:05:27.805 "name": "Nvme0" 00:05:27.805 }, 00:05:27.805 "method": "bdev_nvme_attach_controller" 00:05:27.805 }, 00:05:27.805 { 00:05:27.805 "method": "bdev_wait_for_examine" 00:05:27.805 } 00:05:27.805 ] 00:05:27.805 } 00:05:27.805 ] 00:05:27.805 } 00:05:27.805 [2024-11-15 12:40:36.361925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.805 [2024-11-15 12:40:36.390094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.805 [2024-11-15 12:40:36.417256] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:28.064  [2024-11-15T12:40:36.734Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:28.064 00:05:28.064 12:40:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:05:28.064 12:40:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:28.064 12:40:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:28.064 12:40:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:28.064 { 00:05:28.064 "subsystems": [ 00:05:28.064 { 00:05:28.064 "subsystem": "bdev", 00:05:28.064 "config": [ 00:05:28.064 { 00:05:28.064 "params": { 00:05:28.064 "trtype": "pcie", 00:05:28.064 "traddr": "0000:00:10.0", 00:05:28.064 "name": "Nvme0" 00:05:28.064 }, 00:05:28.064 "method": "bdev_nvme_attach_controller" 00:05:28.064 }, 00:05:28.064 { 00:05:28.064 "method": "bdev_wait_for_examine" 00:05:28.064 } 00:05:28.064 ] 00:05:28.064 } 00:05:28.064 ] 00:05:28.064 } 00:05:28.064 [2024-11-15 12:40:36.684892] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:28.064 [2024-11-15 12:40:36.684988] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59779 ] 00:05:28.323 [2024-11-15 12:40:36.828654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.323 [2024-11-15 12:40:36.856275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.323 [2024-11-15 12:40:36.883570] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:28.323  [2024-11-15T12:40:37.252Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:28.582 00:05:28.582 12:40:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:28.582 12:40:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:28.582 12:40:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:28.582 12:40:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:28.582 12:40:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:28.582 12:40:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:28.582 12:40:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:28.582 12:40:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:28.582 12:40:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:28.582 12:40:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:28.582 12:40:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:28.582 [2024-11-15 12:40:37.141005] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:28.582 [2024-11-15 12:40:37.141114] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59794 ] 00:05:28.582 { 00:05:28.582 "subsystems": [ 00:05:28.582 { 00:05:28.582 "subsystem": "bdev", 00:05:28.582 "config": [ 00:05:28.582 { 00:05:28.582 "params": { 00:05:28.582 "trtype": "pcie", 00:05:28.582 "traddr": "0000:00:10.0", 00:05:28.582 "name": "Nvme0" 00:05:28.582 }, 00:05:28.582 "method": "bdev_nvme_attach_controller" 00:05:28.582 }, 00:05:28.582 { 00:05:28.582 "method": "bdev_wait_for_examine" 00:05:28.582 } 00:05:28.582 ] 00:05:28.582 } 00:05:28.582 ] 00:05:28.582 } 00:05:28.841 [2024-11-15 12:40:37.280563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.841 [2024-11-15 12:40:37.308985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.841 [2024-11-15 12:40:37.336016] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:28.841  [2024-11-15T12:40:37.770Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:29.100 00:05:29.100 12:40:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:29.100 12:40:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:29.100 12:40:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:29.100 12:40:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:29.100 12:40:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:29.100 12:40:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:29.100 12:40:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:29.359 12:40:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:05:29.359 12:40:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:29.359 12:40:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:29.359 12:40:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:29.359 [2024-11-15 12:40:38.021930] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:29.359 [2024-11-15 12:40:38.022047] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59808 ] 00:05:29.618 { 00:05:29.618 "subsystems": [ 00:05:29.618 { 00:05:29.618 "subsystem": "bdev", 00:05:29.618 "config": [ 00:05:29.618 { 00:05:29.618 "params": { 00:05:29.618 "trtype": "pcie", 00:05:29.618 "traddr": "0000:00:10.0", 00:05:29.618 "name": "Nvme0" 00:05:29.618 }, 00:05:29.618 "method": "bdev_nvme_attach_controller" 00:05:29.618 }, 00:05:29.618 { 00:05:29.618 "method": "bdev_wait_for_examine" 00:05:29.618 } 00:05:29.618 ] 00:05:29.618 } 00:05:29.618 ] 00:05:29.618 } 00:05:29.618 [2024-11-15 12:40:38.158056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.618 [2024-11-15 12:40:38.186225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.618 [2024-11-15 12:40:38.213889] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:29.877  [2024-11-15T12:40:38.547Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:29.877 00:05:29.877 12:40:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:05:29.877 12:40:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:29.877 12:40:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:29.877 12:40:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:29.877 [2024-11-15 12:40:38.484125] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:29.877 [2024-11-15 12:40:38.484221] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59827 ] 00:05:29.877 { 00:05:29.877 "subsystems": [ 00:05:29.877 { 00:05:29.877 "subsystem": "bdev", 00:05:29.877 "config": [ 00:05:29.877 { 00:05:29.877 "params": { 00:05:29.877 "trtype": "pcie", 00:05:29.877 "traddr": "0000:00:10.0", 00:05:29.877 "name": "Nvme0" 00:05:29.877 }, 00:05:29.877 "method": "bdev_nvme_attach_controller" 00:05:29.877 }, 00:05:29.877 { 00:05:29.877 "method": "bdev_wait_for_examine" 00:05:29.877 } 00:05:29.877 ] 00:05:29.877 } 00:05:29.877 ] 00:05:29.877 } 00:05:30.136 [2024-11-15 12:40:38.628141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.136 [2024-11-15 12:40:38.658931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.136 [2024-11-15 12:40:38.690413] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:30.136  [2024-11-15T12:40:39.064Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:30.394 00:05:30.394 12:40:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:30.394 12:40:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:30.395 12:40:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:30.395 12:40:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:30.395 12:40:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:30.395 12:40:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:30.395 12:40:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:30.395 12:40:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:30.395 12:40:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:30.395 12:40:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:30.395 12:40:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:30.395 { 00:05:30.395 "subsystems": [ 00:05:30.395 { 00:05:30.395 "subsystem": "bdev", 00:05:30.395 "config": [ 00:05:30.395 { 00:05:30.395 "params": { 00:05:30.395 "trtype": "pcie", 00:05:30.395 "traddr": "0000:00:10.0", 00:05:30.395 "name": "Nvme0" 00:05:30.395 }, 00:05:30.395 "method": "bdev_nvme_attach_controller" 00:05:30.395 }, 00:05:30.395 { 00:05:30.395 "method": "bdev_wait_for_examine" 00:05:30.395 } 00:05:30.395 ] 00:05:30.395 } 00:05:30.395 ] 00:05:30.395 } 00:05:30.395 [2024-11-15 12:40:38.966759] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:30.395 [2024-11-15 12:40:38.966859] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59837 ] 00:05:30.654 [2024-11-15 12:40:39.109857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.654 [2024-11-15 12:40:39.139178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.654 [2024-11-15 12:40:39.166449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:30.654  [2024-11-15T12:40:39.583Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:30.913 00:05:30.913 00:05:30.913 real 0m11.212s 00:05:30.913 user 0m8.365s 00:05:30.913 sys 0m3.433s 00:05:30.913 12:40:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.913 12:40:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:30.913 ************************************ 00:05:30.913 END TEST dd_rw 00:05:30.913 ************************************ 00:05:30.913 12:40:39 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:05:30.913 12:40:39 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.913 12:40:39 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.913 12:40:39 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:30.913 ************************************ 00:05:30.913 START TEST dd_rw_offset 00:05:30.913 ************************************ 00:05:30.913 12:40:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:05:30.913 12:40:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:05:30.913 12:40:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:05:30.913 12:40:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:05:30.913 12:40:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:30.913 12:40:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:05:30.914 12:40:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=ecaqa8g089y9slqdzc7htxesof7kv0t0juez5s7uxistwsa7k7a0e7ffu94esei3cg14jssyb624fha4y49nfthqm4wm90t878q4g5lsv9a51njxznpcgr5jbkmi6o8prm823mj4u3f0085umsjznw1kw6wt294ecqgfwgmay0wx0p00zl2k39t2kz6jlvpkumhu3atsl2vhfq8lk6wnmr4bgqm0sj96rmvnc3jp5ibbhvsjcd2hlr728a1291pv6wmj8bpof1ehc93zut6qmow7u5vzx0bq0hmhdpup8dkbglls8u7frb22oykddotz1htc16trf3tz6uph16muuccqj5fxenn6cvdrri4hip79ksr9186pi5ced5hezbisvrffffigggzbsrv449yzhrt0mcd4jzwoh0tm7quwdiye7rz8saljkcw0zfvyn3xyrpg9qr0beco0b42wtatbbgenh0qjmr0qwkdhh01mr03f9y56dcs7ykmsh66xh2ar98118eoutteb8hw1mshjkg8ndekg8a6cpvvxlp0kf9ubunkr6ig00o4hvcshrtsqjdkvi83f8ewuyaktzhit70ztl8a4k7y40dgx7dp9c3fdijwgmhdrbzin11rxq5o2l9vvi47on3vu5cb93ne7aeu3a5xjj32hs98y32nk085brijzkv9r8f30urraokalqwes65izotweyrrz0tgkrz4irpifm2k67g4wd5xf4gncsuzu34m5b736jpqxddlzcre02p22lhfm1n8e8fkmopq0zvitg1tp9cd33u6q03gft3517d7v2d5ug0d0udvjc749pd2ximuasi9ezjnuu5syush728kodw32nngmfcilwya2cn1gmt3vwo7t1nmqe4e310z795c8203ea9wechtebemv0us8rmrmxtuee2fupgdo8n849p7s106o7dt7cyjgnlt2h5qkzbllyxwkcmecnwlh5xg9gl6vwrc3o58gzt5hig81d120dtz0xurydkkra7rq1nn8yahbhqk7uein8hp614w4c351f0i6b8t1c1hv8jfhe4b1cpp2bnjofcg80mfz0fklxd2jlwcpc5i03s0a49f06l1hp7ot3oo6cgx1akyj0cl109t33122n7m2hitbgitj3676ngfwduulppsqs5vtrpi2yeh5pgpltab47v1zcl6f416qmvdh2hlm7e74esyr3aer7edbjm39vxlplgee0z7sbvq8hrurnjmxugfkfewh7yok5huwdyxrd00s9syocf75dvtajjp07bwv01bo96mxlg994joomtagtek1rel263vd8bb10tdm6vpcv9n9i9mmff9yy41eeq6bajxgflernwqacwskpzc463pu2ij5k28nl3jlm462z3nrx4fgcv2eohcfhmy3glthefy89fje8ieorpv9zh0ka4t4klq0vi48l9jpcd7punh9i9i33bzj24kw1inkuswxfukj2axewr2mv93jvwexum5rsjbn92wk9ji84fqgyhmz3twc9sc10k9p0iq5g8lxrvuxdaf4gbndqrgksoxtt0eaf89kjli7gy1i2fzb5uenap88ja6ssr8ikrqmftvi1yxc5wgcudvfmgvkh41cxwofdztx6xjq77xfuss1cxupfgso5cz9ypzzttr9uldskd9odch0btbng44jbsck2s9sxgpel75m1u9cjk330xqj5unx4j31ezq6hmgido3yr3sywt14n7gf2ap8v73eigusse4mublw7r89xr8de0k3abdq5o3s89spptsnrcvq341d5g543dr4iln4dejgbn2g27k99eubnckvx5xtu510ul8r59gk3lpek6mmuj3d4y9o8ivi9xf4gvacptxxoe48wzqxmhp37r23qybxsktg2iv5zrucp7qc032e5wm7k2knd33gw05x7q4gkcd96c27u1b4f46q342jxv132veb3x7uu7ps4o8g93xq3c0gdmc44tl20x8apntoppl4f63w6jngynnxaigi9gxc86z0bf2pzrzbxzttjvjraa9h9mgv5qywlmiiegf5634yjo1ofdct259mefa74fgpx5m5bgojxm4fapqpu3ksrbjet5zbvh99l4al6jrr6fjdvjwy5exftqkn1z86zh3rwf5rprmearabx11sx5sr9hcczar8vjyv4nq0gpx4bnxi4zsf52ynvj3npzr21mkymfuorsfxqfotruy1mfiaerlxitk2q2tek72hg98dp3lvrwwwlvz4a3uqnymhkrv583lmaaycaqn3v4gknvwlfkqs5j5se57ta7q79t0gai12xixy68wcw2kmvv2kcwbyi303ls7zg4f8p7oeazqwgg9tixk2al35ojnt497ztynb4g8q48613p5klqs6qa2jse17g49nlic5pzxm0wlhr43ajd18bzk58vch8yysikl7co3fpf0a9khoyth3gczgndvpfiqxmomky80saqp9eh9rhms1jzuaz8pykq9fv06swzaeug8kd0oaqp2vj78lwg3fcnyqrgatl7mrvsnp37pqg3dac461898l6xhdev9b6jk6eqy10u79u22unti317vi6sd5u8sjgqji1rw1idxq0tre5j1jun79oya8ciorzhgun3zte5kaek7hzf8m37h0u067xyh8fxqz90o9xsqugsk19vp09zjv27wb29tpp1boeqhmyyvw1ldc0u3knl9kvx54e3ylholnl23p745otrg5bc5rko8z68v984keirc79ec1kzyyc3pp70i5c1yxrvbcfuvjnhbiucvj2e4n2jq4j3henxu7zgx3s8clygfulghm1fq7bh80z0ot176t0q696cc3kb32kit79pat453a1k8pyrmb7f2a781a8iqwfnsw2begkcyas9d7yxh361cnd7rhx2gkvyoswk06dvmdikj00hj2voojyfl4bvv4mzscr9xhx81fm9xml007ss5u0tim4i01tv22dzucsy06ne8fevvq9njrspqx24z75pa9v4c80jsnac9c0lbqvk1dibm6n08b79toqb1ftzszci512z1jh55vkau9pexw2x1auzlqwejpq0xbqc2ya9wmdh8c93j3ttaasscg2j8e5h68ybt2pcqi0lw7eqajrg2jqn2fby090f2egbvu3192asv6b1a8mr07a18crzxarxzopjwky19r2qmr2r95r67bgke2f1jv8kfg3eguj78qqkrl68bc1adlhm85dhbay6cjv285bstamemvg5bw3pafyae2svy2wvqka0ud6qoo7v9o7us137wt7zatzx0vuyjs87yv09rejlox6cu4rjbv3hxjuytn4f4w419jz1hd4frelx4mtflj162ckxr9d14zr54cwbmxp2bbdf4vlzk1s2i511te9nqazely1v1o5atvwhgy98116402pxmxh6owxiz122lzy64zzta2usu6rja0btrpe0qcr5tt7lklopukelb93k9hgui2sf
bfj70v43vyi8qqr9tzdaewkp2mofsxl7xbu0shnm419npy903bg5vh9jg4r8d0qtf7oc8hqxt57n525yfsfuds75njyb8fvcc368upnpsstjoj6eyqdo6yekkzah7iss5kixg5mr6144x1och1ugyjix48ko9n1qe6zyd93s71b4nxj98ae9l4segnkhilwl05i2ypvqp2aw4xmkcwfosl0how5wlhyfofaizfxim3woidv1bvhmcp22omkylikyer1b71jniapy6n7829kef1yzs6k9x7yr0k4uywbjgn7lucgfys6bqd6v4l6e69blqu1f6a90vm363itxyyuv5kixx06ttqbtftjqtxmep3yu5790e61irvgmm5jvcgh3boe26t45fq9bgk2vu7aknfxhegch723srxu9j7c5xnrcedcblxq2mfmndc23xwovpe0vvfdeklxxwyne7mhzqq5j3qwyqs0dr0an5qjrhfuidvooxteh78tad87wxer88f79quvqborcjakauzc25j02g272uynz4b 00:05:30.914 12:40:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:05:30.914 12:40:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:05:30.914 12:40:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:05:30.914 12:40:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:30.914 [2024-11-15 12:40:39.527449] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:05:30.914 [2024-11-15 12:40:39.527556] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59873 ] 00:05:30.914 { 00:05:30.914 "subsystems": [ 00:05:30.914 { 00:05:30.914 "subsystem": "bdev", 00:05:30.914 "config": [ 00:05:30.914 { 00:05:30.914 "params": { 00:05:30.914 "trtype": "pcie", 00:05:30.914 "traddr": "0000:00:10.0", 00:05:30.914 "name": "Nvme0" 00:05:30.914 }, 00:05:30.914 "method": "bdev_nvme_attach_controller" 00:05:30.914 }, 00:05:30.914 { 00:05:30.914 "method": "bdev_wait_for_examine" 00:05:30.914 } 00:05:30.914 ] 00:05:30.914 } 00:05:30.914 ] 00:05:30.914 } 00:05:31.173 [2024-11-15 12:40:39.672163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.173 [2024-11-15 12:40:39.703325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.173 [2024-11-15 12:40:39.735878] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:31.173  [2024-11-15T12:40:40.102Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:05:31.432 00:05:31.432 12:40:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:05:31.432 12:40:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:05:31.432 12:40:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:05:31.432 12:40:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:31.432 [2024-11-15 12:40:39.979400] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
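dd_rw_offset checks that seek and skip offsets are honoured: gen_bytes 4096 produced the 4096-character payload dumped above as data=..., it is written one I/O unit into the bdev with --seek=1, read back from the same offset with --skip=1 --count=1, and basic_rw.sh@71/72 (continued below) then captures the first 4096 bytes of the read-back file with read -rn4096 into data_check and string-compares it against the original. A condensed sketch of that round trip; conf.json again stands in for the piped config, and the payload generation and the redirection from dd.dump1 are assumptions rather than the literal gen_bytes / basic_rw.sh bodies:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
D=/home/vagrant/spdk_repo/spdk/test/dd

data=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 4096)           # stand-in for gen_bytes 4096
printf '%s' "$data" > "$D/dd.dump0"

"$SPDK_DD" --if="$D/dd.dump0" --ob=Nvme0n1 --seek=1           --json conf.json  # write at offset of 1 I/O unit
"$SPDK_DD" --ib=Nvme0n1 --of="$D/dd.dump1" --skip=1 --count=1 --json conf.json  # read that unit back

read -rn4096 data_check < "$D/dd.dump1"
[[ "$data_check" == "$data" ]]                                   # offsets were honoured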
00:05:31.432 [2024-11-15 12:40:39.979500] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59881 ] 00:05:31.432 { 00:05:31.433 "subsystems": [ 00:05:31.433 { 00:05:31.433 "subsystem": "bdev", 00:05:31.433 "config": [ 00:05:31.433 { 00:05:31.433 "params": { 00:05:31.433 "trtype": "pcie", 00:05:31.433 "traddr": "0000:00:10.0", 00:05:31.433 "name": "Nvme0" 00:05:31.433 }, 00:05:31.433 "method": "bdev_nvme_attach_controller" 00:05:31.433 }, 00:05:31.433 { 00:05:31.433 "method": "bdev_wait_for_examine" 00:05:31.433 } 00:05:31.433 ] 00:05:31.433 } 00:05:31.433 ] 00:05:31.433 } 00:05:31.692 [2024-11-15 12:40:40.115479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.692 [2024-11-15 12:40:40.147724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.692 [2024-11-15 12:40:40.178887] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:31.692  [2024-11-15T12:40:40.622Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:05:31.952 00:05:31.952 12:40:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:05:31.953 12:40:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ ecaqa8g089y9slqdzc7htxesof7kv0t0juez5s7uxistwsa7k7a0e7ffu94esei3cg14jssyb624fha4y49nfthqm4wm90t878q4g5lsv9a51njxznpcgr5jbkmi6o8prm823mj4u3f0085umsjznw1kw6wt294ecqgfwgmay0wx0p00zl2k39t2kz6jlvpkumhu3atsl2vhfq8lk6wnmr4bgqm0sj96rmvnc3jp5ibbhvsjcd2hlr728a1291pv6wmj8bpof1ehc93zut6qmow7u5vzx0bq0hmhdpup8dkbglls8u7frb22oykddotz1htc16trf3tz6uph16muuccqj5fxenn6cvdrri4hip79ksr9186pi5ced5hezbisvrffffigggzbsrv449yzhrt0mcd4jzwoh0tm7quwdiye7rz8saljkcw0zfvyn3xyrpg9qr0beco0b42wtatbbgenh0qjmr0qwkdhh01mr03f9y56dcs7ykmsh66xh2ar98118eoutteb8hw1mshjkg8ndekg8a6cpvvxlp0kf9ubunkr6ig00o4hvcshrtsqjdkvi83f8ewuyaktzhit70ztl8a4k7y40dgx7dp9c3fdijwgmhdrbzin11rxq5o2l9vvi47on3vu5cb93ne7aeu3a5xjj32hs98y32nk085brijzkv9r8f30urraokalqwes65izotweyrrz0tgkrz4irpifm2k67g4wd5xf4gncsuzu34m5b736jpqxddlzcre02p22lhfm1n8e8fkmopq0zvitg1tp9cd33u6q03gft3517d7v2d5ug0d0udvjc749pd2ximuasi9ezjnuu5syush728kodw32nngmfcilwya2cn1gmt3vwo7t1nmqe4e310z795c8203ea9wechtebemv0us8rmrmxtuee2fupgdo8n849p7s106o7dt7cyjgnlt2h5qkzbllyxwkcmecnwlh5xg9gl6vwrc3o58gzt5hig81d120dtz0xurydkkra7rq1nn8yahbhqk7uein8hp614w4c351f0i6b8t1c1hv8jfhe4b1cpp2bnjofcg80mfz0fklxd2jlwcpc5i03s0a49f06l1hp7ot3oo6cgx1akyj0cl109t33122n7m2hitbgitj3676ngfwduulppsqs5vtrpi2yeh5pgpltab47v1zcl6f416qmvdh2hlm7e74esyr3aer7edbjm39vxlplgee0z7sbvq8hrurnjmxugfkfewh7yok5huwdyxrd00s9syocf75dvtajjp07bwv01bo96mxlg994joomtagtek1rel263vd8bb10tdm6vpcv9n9i9mmff9yy41eeq6bajxgflernwqacwskpzc463pu2ij5k28nl3jlm462z3nrx4fgcv2eohcfhmy3glthefy89fje8ieorpv9zh0ka4t4klq0vi48l9jpcd7punh9i9i33bzj24kw1inkuswxfukj2axewr2mv93jvwexum5rsjbn92wk9ji84fqgyhmz3twc9sc10k9p0iq5g8lxrvuxdaf4gbndqrgksoxtt0eaf89kjli7gy1i2fzb5uenap88ja6ssr8ikrqmftvi1yxc5wgcudvfmgvkh41cxwofdztx6xjq77xfuss1cxupfgso5cz9ypzzttr9uldskd9odch0btbng44jbsck2s9sxgpel75m1u9cjk330xqj5unx4j31ezq6hmgido3yr3sywt14n7gf2ap8v73eigusse4mublw7r89xr8de0k3abdq5o3s89spptsnrcvq341d5g543dr4iln4dejgbn2g27k99eubnckvx5xtu510ul8r59gk3lpek6mmuj3d4y9o8ivi9xf4gvacptxxoe48wzqxmhp37r23qybxsktg2iv5zrucp7qc032e5wm7k2knd33gw05x7q4gkcd96c27u1b4f46q342jxv132veb3x7uu7ps4o8g93xq3c0gdmc44tl20x8apntoppl4f63w6jngynnxaigi9gxc86z0bf2pzrzbxzttjvjraa9h9mgv5qywlmiiegf5634yjo1ofdct259mefa74fgpx5m5bgojxm4fa
pqpu3ksrbjet5zbvh99l4al6jrr6fjdvjwy5exftqkn1z86zh3rwf5rprmearabx11sx5sr9hcczar8vjyv4nq0gpx4bnxi4zsf52ynvj3npzr21mkymfuorsfxqfotruy1mfiaerlxitk2q2tek72hg98dp3lvrwwwlvz4a3uqnymhkrv583lmaaycaqn3v4gknvwlfkqs5j5se57ta7q79t0gai12xixy68wcw2kmvv2kcwbyi303ls7zg4f8p7oeazqwgg9tixk2al35ojnt497ztynb4g8q48613p5klqs6qa2jse17g49nlic5pzxm0wlhr43ajd18bzk58vch8yysikl7co3fpf0a9khoyth3gczgndvpfiqxmomky80saqp9eh9rhms1jzuaz8pykq9fv06swzaeug8kd0oaqp2vj78lwg3fcnyqrgatl7mrvsnp37pqg3dac461898l6xhdev9b6jk6eqy10u79u22unti317vi6sd5u8sjgqji1rw1idxq0tre5j1jun79oya8ciorzhgun3zte5kaek7hzf8m37h0u067xyh8fxqz90o9xsqugsk19vp09zjv27wb29tpp1boeqhmyyvw1ldc0u3knl9kvx54e3ylholnl23p745otrg5bc5rko8z68v984keirc79ec1kzyyc3pp70i5c1yxrvbcfuvjnhbiucvj2e4n2jq4j3henxu7zgx3s8clygfulghm1fq7bh80z0ot176t0q696cc3kb32kit79pat453a1k8pyrmb7f2a781a8iqwfnsw2begkcyas9d7yxh361cnd7rhx2gkvyoswk06dvmdikj00hj2voojyfl4bvv4mzscr9xhx81fm9xml007ss5u0tim4i01tv22dzucsy06ne8fevvq9njrspqx24z75pa9v4c80jsnac9c0lbqvk1dibm6n08b79toqb1ftzszci512z1jh55vkau9pexw2x1auzlqwejpq0xbqc2ya9wmdh8c93j3ttaasscg2j8e5h68ybt2pcqi0lw7eqajrg2jqn2fby090f2egbvu3192asv6b1a8mr07a18crzxarxzopjwky19r2qmr2r95r67bgke2f1jv8kfg3eguj78qqkrl68bc1adlhm85dhbay6cjv285bstamemvg5bw3pafyae2svy2wvqka0ud6qoo7v9o7us137wt7zatzx0vuyjs87yv09rejlox6cu4rjbv3hxjuytn4f4w419jz1hd4frelx4mtflj162ckxr9d14zr54cwbmxp2bbdf4vlzk1s2i511te9nqazely1v1o5atvwhgy98116402pxmxh6owxiz122lzy64zzta2usu6rja0btrpe0qcr5tt7lklopukelb93k9hgui2sfbfj70v43vyi8qqr9tzdaewkp2mofsxl7xbu0shnm419npy903bg5vh9jg4r8d0qtf7oc8hqxt57n525yfsfuds75njyb8fvcc368upnpsstjoj6eyqdo6yekkzah7iss5kixg5mr6144x1och1ugyjix48ko9n1qe6zyd93s71b4nxj98ae9l4segnkhilwl05i2ypvqp2aw4xmkcwfosl0how5wlhyfofaizfxim3woidv1bvhmcp22omkylikyer1b71jniapy6n7829kef1yzs6k9x7yr0k4uywbjgn7lucgfys6bqd6v4l6e69blqu1f6a90vm363itxyyuv5kixx06ttqbtftjqtxmep3yu5790e61irvgmm5jvcgh3boe26t45fq9bgk2vu7aknfxhegch723srxu9j7c5xnrcedcblxq2mfmndc23xwovpe0vvfdeklxxwyne7mhzqq5j3qwyqs0dr0an5qjrhfuidvooxteh78tad87wxer88f79quvqborcjakauzc25j02g272uynz4b == 
\e\c\a\q\a\8\g\0\8\9\y\9\s\l\q\d\z\c\7\h\t\x\e\s\o\f\7\k\v\0\t\0\j\u\e\z\5\s\7\u\x\i\s\t\w\s\a\7\k\7\a\0\e\7\f\f\u\9\4\e\s\e\i\3\c\g\1\4\j\s\s\y\b\6\2\4\f\h\a\4\y\4\9\n\f\t\h\q\m\4\w\m\9\0\t\8\7\8\q\4\g\5\l\s\v\9\a\5\1\n\j\x\z\n\p\c\g\r\5\j\b\k\m\i\6\o\8\p\r\m\8\2\3\m\j\4\u\3\f\0\0\8\5\u\m\s\j\z\n\w\1\k\w\6\w\t\2\9\4\e\c\q\g\f\w\g\m\a\y\0\w\x\0\p\0\0\z\l\2\k\3\9\t\2\k\z\6\j\l\v\p\k\u\m\h\u\3\a\t\s\l\2\v\h\f\q\8\l\k\6\w\n\m\r\4\b\g\q\m\0\s\j\9\6\r\m\v\n\c\3\j\p\5\i\b\b\h\v\s\j\c\d\2\h\l\r\7\2\8\a\1\2\9\1\p\v\6\w\m\j\8\b\p\o\f\1\e\h\c\9\3\z\u\t\6\q\m\o\w\7\u\5\v\z\x\0\b\q\0\h\m\h\d\p\u\p\8\d\k\b\g\l\l\s\8\u\7\f\r\b\2\2\o\y\k\d\d\o\t\z\1\h\t\c\1\6\t\r\f\3\t\z\6\u\p\h\1\6\m\u\u\c\c\q\j\5\f\x\e\n\n\6\c\v\d\r\r\i\4\h\i\p\7\9\k\s\r\9\1\8\6\p\i\5\c\e\d\5\h\e\z\b\i\s\v\r\f\f\f\f\i\g\g\g\z\b\s\r\v\4\4\9\y\z\h\r\t\0\m\c\d\4\j\z\w\o\h\0\t\m\7\q\u\w\d\i\y\e\7\r\z\8\s\a\l\j\k\c\w\0\z\f\v\y\n\3\x\y\r\p\g\9\q\r\0\b\e\c\o\0\b\4\2\w\t\a\t\b\b\g\e\n\h\0\q\j\m\r\0\q\w\k\d\h\h\0\1\m\r\0\3\f\9\y\5\6\d\c\s\7\y\k\m\s\h\6\6\x\h\2\a\r\9\8\1\1\8\e\o\u\t\t\e\b\8\h\w\1\m\s\h\j\k\g\8\n\d\e\k\g\8\a\6\c\p\v\v\x\l\p\0\k\f\9\u\b\u\n\k\r\6\i\g\0\0\o\4\h\v\c\s\h\r\t\s\q\j\d\k\v\i\8\3\f\8\e\w\u\y\a\k\t\z\h\i\t\7\0\z\t\l\8\a\4\k\7\y\4\0\d\g\x\7\d\p\9\c\3\f\d\i\j\w\g\m\h\d\r\b\z\i\n\1\1\r\x\q\5\o\2\l\9\v\v\i\4\7\o\n\3\v\u\5\c\b\9\3\n\e\7\a\e\u\3\a\5\x\j\j\3\2\h\s\9\8\y\3\2\n\k\0\8\5\b\r\i\j\z\k\v\9\r\8\f\3\0\u\r\r\a\o\k\a\l\q\w\e\s\6\5\i\z\o\t\w\e\y\r\r\z\0\t\g\k\r\z\4\i\r\p\i\f\m\2\k\6\7\g\4\w\d\5\x\f\4\g\n\c\s\u\z\u\3\4\m\5\b\7\3\6\j\p\q\x\d\d\l\z\c\r\e\0\2\p\2\2\l\h\f\m\1\n\8\e\8\f\k\m\o\p\q\0\z\v\i\t\g\1\t\p\9\c\d\3\3\u\6\q\0\3\g\f\t\3\5\1\7\d\7\v\2\d\5\u\g\0\d\0\u\d\v\j\c\7\4\9\p\d\2\x\i\m\u\a\s\i\9\e\z\j\n\u\u\5\s\y\u\s\h\7\2\8\k\o\d\w\3\2\n\n\g\m\f\c\i\l\w\y\a\2\c\n\1\g\m\t\3\v\w\o\7\t\1\n\m\q\e\4\e\3\1\0\z\7\9\5\c\8\2\0\3\e\a\9\w\e\c\h\t\e\b\e\m\v\0\u\s\8\r\m\r\m\x\t\u\e\e\2\f\u\p\g\d\o\8\n\8\4\9\p\7\s\1\0\6\o\7\d\t\7\c\y\j\g\n\l\t\2\h\5\q\k\z\b\l\l\y\x\w\k\c\m\e\c\n\w\l\h\5\x\g\9\g\l\6\v\w\r\c\3\o\5\8\g\z\t\5\h\i\g\8\1\d\1\2\0\d\t\z\0\x\u\r\y\d\k\k\r\a\7\r\q\1\n\n\8\y\a\h\b\h\q\k\7\u\e\i\n\8\h\p\6\1\4\w\4\c\3\5\1\f\0\i\6\b\8\t\1\c\1\h\v\8\j\f\h\e\4\b\1\c\p\p\2\b\n\j\o\f\c\g\8\0\m\f\z\0\f\k\l\x\d\2\j\l\w\c\p\c\5\i\0\3\s\0\a\4\9\f\0\6\l\1\h\p\7\o\t\3\o\o\6\c\g\x\1\a\k\y\j\0\c\l\1\0\9\t\3\3\1\2\2\n\7\m\2\h\i\t\b\g\i\t\j\3\6\7\6\n\g\f\w\d\u\u\l\p\p\s\q\s\5\v\t\r\p\i\2\y\e\h\5\p\g\p\l\t\a\b\4\7\v\1\z\c\l\6\f\4\1\6\q\m\v\d\h\2\h\l\m\7\e\7\4\e\s\y\r\3\a\e\r\7\e\d\b\j\m\3\9\v\x\l\p\l\g\e\e\0\z\7\s\b\v\q\8\h\r\u\r\n\j\m\x\u\g\f\k\f\e\w\h\7\y\o\k\5\h\u\w\d\y\x\r\d\0\0\s\9\s\y\o\c\f\7\5\d\v\t\a\j\j\p\0\7\b\w\v\0\1\b\o\9\6\m\x\l\g\9\9\4\j\o\o\m\t\a\g\t\e\k\1\r\e\l\2\6\3\v\d\8\b\b\1\0\t\d\m\6\v\p\c\v\9\n\9\i\9\m\m\f\f\9\y\y\4\1\e\e\q\6\b\a\j\x\g\f\l\e\r\n\w\q\a\c\w\s\k\p\z\c\4\6\3\p\u\2\i\j\5\k\2\8\n\l\3\j\l\m\4\6\2\z\3\n\r\x\4\f\g\c\v\2\e\o\h\c\f\h\m\y\3\g\l\t\h\e\f\y\8\9\f\j\e\8\i\e\o\r\p\v\9\z\h\0\k\a\4\t\4\k\l\q\0\v\i\4\8\l\9\j\p\c\d\7\p\u\n\h\9\i\9\i\3\3\b\z\j\2\4\k\w\1\i\n\k\u\s\w\x\f\u\k\j\2\a\x\e\w\r\2\m\v\9\3\j\v\w\e\x\u\m\5\r\s\j\b\n\9\2\w\k\9\j\i\8\4\f\q\g\y\h\m\z\3\t\w\c\9\s\c\1\0\k\9\p\0\i\q\5\g\8\l\x\r\v\u\x\d\a\f\4\g\b\n\d\q\r\g\k\s\o\x\t\t\0\e\a\f\8\9\k\j\l\i\7\g\y\1\i\2\f\z\b\5\u\e\n\a\p\8\8\j\a\6\s\s\r\8\i\k\r\q\m\f\t\v\i\1\y\x\c\5\w\g\c\u\d\v\f\m\g\v\k\h\4\1\c\x\w\o\f\d\z\t\x\6\x\j\q\7\7\x\f\u\s\s\1\c\x\u\p\f\g\s\o\5\c\z\9\y\p\z\z\t\t\r\9\u\l\d\s\k\d\9\o\d\c\h\0\b\t\b\n\g\4\4\j\b\s\c\k\2\s\9\s\x\g\p\e\l\7\5\m\1\u\9\c\j\k\3\3\0\x\q\j\5\u\n\x\4\j\3\1\e\z\q\6\h\m\g\i\d\o\3\y\r\3\s\y\w\t\1\4\n\7\g\f\2\a\p\8\v\7\3\e\i\
g\u\s\s\e\4\m\u\b\l\w\7\r\8\9\x\r\8\d\e\0\k\3\a\b\d\q\5\o\3\s\8\9\s\p\p\t\s\n\r\c\v\q\3\4\1\d\5\g\5\4\3\d\r\4\i\l\n\4\d\e\j\g\b\n\2\g\2\7\k\9\9\e\u\b\n\c\k\v\x\5\x\t\u\5\1\0\u\l\8\r\5\9\g\k\3\l\p\e\k\6\m\m\u\j\3\d\4\y\9\o\8\i\v\i\9\x\f\4\g\v\a\c\p\t\x\x\o\e\4\8\w\z\q\x\m\h\p\3\7\r\2\3\q\y\b\x\s\k\t\g\2\i\v\5\z\r\u\c\p\7\q\c\0\3\2\e\5\w\m\7\k\2\k\n\d\3\3\g\w\0\5\x\7\q\4\g\k\c\d\9\6\c\2\7\u\1\b\4\f\4\6\q\3\4\2\j\x\v\1\3\2\v\e\b\3\x\7\u\u\7\p\s\4\o\8\g\9\3\x\q\3\c\0\g\d\m\c\4\4\t\l\2\0\x\8\a\p\n\t\o\p\p\l\4\f\6\3\w\6\j\n\g\y\n\n\x\a\i\g\i\9\g\x\c\8\6\z\0\b\f\2\p\z\r\z\b\x\z\t\t\j\v\j\r\a\a\9\h\9\m\g\v\5\q\y\w\l\m\i\i\e\g\f\5\6\3\4\y\j\o\1\o\f\d\c\t\2\5\9\m\e\f\a\7\4\f\g\p\x\5\m\5\b\g\o\j\x\m\4\f\a\p\q\p\u\3\k\s\r\b\j\e\t\5\z\b\v\h\9\9\l\4\a\l\6\j\r\r\6\f\j\d\v\j\w\y\5\e\x\f\t\q\k\n\1\z\8\6\z\h\3\r\w\f\5\r\p\r\m\e\a\r\a\b\x\1\1\s\x\5\s\r\9\h\c\c\z\a\r\8\v\j\y\v\4\n\q\0\g\p\x\4\b\n\x\i\4\z\s\f\5\2\y\n\v\j\3\n\p\z\r\2\1\m\k\y\m\f\u\o\r\s\f\x\q\f\o\t\r\u\y\1\m\f\i\a\e\r\l\x\i\t\k\2\q\2\t\e\k\7\2\h\g\9\8\d\p\3\l\v\r\w\w\w\l\v\z\4\a\3\u\q\n\y\m\h\k\r\v\5\8\3\l\m\a\a\y\c\a\q\n\3\v\4\g\k\n\v\w\l\f\k\q\s\5\j\5\s\e\5\7\t\a\7\q\7\9\t\0\g\a\i\1\2\x\i\x\y\6\8\w\c\w\2\k\m\v\v\2\k\c\w\b\y\i\3\0\3\l\s\7\z\g\4\f\8\p\7\o\e\a\z\q\w\g\g\9\t\i\x\k\2\a\l\3\5\o\j\n\t\4\9\7\z\t\y\n\b\4\g\8\q\4\8\6\1\3\p\5\k\l\q\s\6\q\a\2\j\s\e\1\7\g\4\9\n\l\i\c\5\p\z\x\m\0\w\l\h\r\4\3\a\j\d\1\8\b\z\k\5\8\v\c\h\8\y\y\s\i\k\l\7\c\o\3\f\p\f\0\a\9\k\h\o\y\t\h\3\g\c\z\g\n\d\v\p\f\i\q\x\m\o\m\k\y\8\0\s\a\q\p\9\e\h\9\r\h\m\s\1\j\z\u\a\z\8\p\y\k\q\9\f\v\0\6\s\w\z\a\e\u\g\8\k\d\0\o\a\q\p\2\v\j\7\8\l\w\g\3\f\c\n\y\q\r\g\a\t\l\7\m\r\v\s\n\p\3\7\p\q\g\3\d\a\c\4\6\1\8\9\8\l\6\x\h\d\e\v\9\b\6\j\k\6\e\q\y\1\0\u\7\9\u\2\2\u\n\t\i\3\1\7\v\i\6\s\d\5\u\8\s\j\g\q\j\i\1\r\w\1\i\d\x\q\0\t\r\e\5\j\1\j\u\n\7\9\o\y\a\8\c\i\o\r\z\h\g\u\n\3\z\t\e\5\k\a\e\k\7\h\z\f\8\m\3\7\h\0\u\0\6\7\x\y\h\8\f\x\q\z\9\0\o\9\x\s\q\u\g\s\k\1\9\v\p\0\9\z\j\v\2\7\w\b\2\9\t\p\p\1\b\o\e\q\h\m\y\y\v\w\1\l\d\c\0\u\3\k\n\l\9\k\v\x\5\4\e\3\y\l\h\o\l\n\l\2\3\p\7\4\5\o\t\r\g\5\b\c\5\r\k\o\8\z\6\8\v\9\8\4\k\e\i\r\c\7\9\e\c\1\k\z\y\y\c\3\p\p\7\0\i\5\c\1\y\x\r\v\b\c\f\u\v\j\n\h\b\i\u\c\v\j\2\e\4\n\2\j\q\4\j\3\h\e\n\x\u\7\z\g\x\3\s\8\c\l\y\g\f\u\l\g\h\m\1\f\q\7\b\h\8\0\z\0\o\t\1\7\6\t\0\q\6\9\6\c\c\3\k\b\3\2\k\i\t\7\9\p\a\t\4\5\3\a\1\k\8\p\y\r\m\b\7\f\2\a\7\8\1\a\8\i\q\w\f\n\s\w\2\b\e\g\k\c\y\a\s\9\d\7\y\x\h\3\6\1\c\n\d\7\r\h\x\2\g\k\v\y\o\s\w\k\0\6\d\v\m\d\i\k\j\0\0\h\j\2\v\o\o\j\y\f\l\4\b\v\v\4\m\z\s\c\r\9\x\h\x\8\1\f\m\9\x\m\l\0\0\7\s\s\5\u\0\t\i\m\4\i\0\1\t\v\2\2\d\z\u\c\s\y\0\6\n\e\8\f\e\v\v\q\9\n\j\r\s\p\q\x\2\4\z\7\5\p\a\9\v\4\c\8\0\j\s\n\a\c\9\c\0\l\b\q\v\k\1\d\i\b\m\6\n\0\8\b\7\9\t\o\q\b\1\f\t\z\s\z\c\i\5\1\2\z\1\j\h\5\5\v\k\a\u\9\p\e\x\w\2\x\1\a\u\z\l\q\w\e\j\p\q\0\x\b\q\c\2\y\a\9\w\m\d\h\8\c\9\3\j\3\t\t\a\a\s\s\c\g\2\j\8\e\5\h\6\8\y\b\t\2\p\c\q\i\0\l\w\7\e\q\a\j\r\g\2\j\q\n\2\f\b\y\0\9\0\f\2\e\g\b\v\u\3\1\9\2\a\s\v\6\b\1\a\8\m\r\0\7\a\1\8\c\r\z\x\a\r\x\z\o\p\j\w\k\y\1\9\r\2\q\m\r\2\r\9\5\r\6\7\b\g\k\e\2\f\1\j\v\8\k\f\g\3\e\g\u\j\7\8\q\q\k\r\l\6\8\b\c\1\a\d\l\h\m\8\5\d\h\b\a\y\6\c\j\v\2\8\5\b\s\t\a\m\e\m\v\g\5\b\w\3\p\a\f\y\a\e\2\s\v\y\2\w\v\q\k\a\0\u\d\6\q\o\o\7\v\9\o\7\u\s\1\3\7\w\t\7\z\a\t\z\x\0\v\u\y\j\s\8\7\y\v\0\9\r\e\j\l\o\x\6\c\u\4\r\j\b\v\3\h\x\j\u\y\t\n\4\f\4\w\4\1\9\j\z\1\h\d\4\f\r\e\l\x\4\m\t\f\l\j\1\6\2\c\k\x\r\9\d\1\4\z\r\5\4\c\w\b\m\x\p\2\b\b\d\f\4\v\l\z\k\1\s\2\i\5\1\1\t\e\9\n\q\a\z\e\l\y\1\v\1\o\5\a\t\v\w\h\g\y\9\8\1\1\6\4\0\2\p\x\m\x\h\6\o\w\x\i\z\1\2\2\l\z\y\6\4\z\z\t\a\2\u\s\u\6\r\j\a\0\b\t\r\p\e\0\q\c\r\5\t\t\7\l\k\l\o\p\u\k\e\l\b\9\3\k\9\h\g\u\i\2\s\f\b\f\j\7\0
\v\4\3\v\y\i\8\q\q\r\9\t\z\d\a\e\w\k\p\2\m\o\f\s\x\l\7\x\b\u\0\s\h\n\m\4\1\9\n\p\y\9\0\3\b\g\5\v\h\9\j\g\4\r\8\d\0\q\t\f\7\o\c\8\h\q\x\t\5\7\n\5\2\5\y\f\s\f\u\d\s\7\5\n\j\y\b\8\f\v\c\c\3\6\8\u\p\n\p\s\s\t\j\o\j\6\e\y\q\d\o\6\y\e\k\k\z\a\h\7\i\s\s\5\k\i\x\g\5\m\r\6\1\4\4\x\1\o\c\h\1\u\g\y\j\i\x\4\8\k\o\9\n\1\q\e\6\z\y\d\9\3\s\7\1\b\4\n\x\j\9\8\a\e\9\l\4\s\e\g\n\k\h\i\l\w\l\0\5\i\2\y\p\v\q\p\2\a\w\4\x\m\k\c\w\f\o\s\l\0\h\o\w\5\w\l\h\y\f\o\f\a\i\z\f\x\i\m\3\w\o\i\d\v\1\b\v\h\m\c\p\2\2\o\m\k\y\l\i\k\y\e\r\1\b\7\1\j\n\i\a\p\y\6\n\7\8\2\9\k\e\f\1\y\z\s\6\k\9\x\7\y\r\0\k\4\u\y\w\b\j\g\n\7\l\u\c\g\f\y\s\6\b\q\d\6\v\4\l\6\e\6\9\b\l\q\u\1\f\6\a\9\0\v\m\3\6\3\i\t\x\y\y\u\v\5\k\i\x\x\0\6\t\t\q\b\t\f\t\j\q\t\x\m\e\p\3\y\u\5\7\9\0\e\6\1\i\r\v\g\m\m\5\j\v\c\g\h\3\b\o\e\2\6\t\4\5\f\q\9\b\g\k\2\v\u\7\a\k\n\f\x\h\e\g\c\h\7\2\3\s\r\x\u\9\j\7\c\5\x\n\r\c\e\d\c\b\l\x\q\2\m\f\m\n\d\c\2\3\x\w\o\v\p\e\0\v\v\f\d\e\k\l\x\x\w\y\n\e\7\m\h\z\q\q\5\j\3\q\w\y\q\s\0\d\r\0\a\n\5\q\j\r\h\f\u\i\d\v\o\o\x\t\e\h\7\8\t\a\d\8\7\w\x\e\r\8\8\f\7\9\q\u\v\q\b\o\r\c\j\a\k\a\u\z\c\2\5\j\0\2\g\2\7\2\u\y\n\z\4\b ]] 00:05:31.953 00:05:31.953 real 0m0.980s 00:05:31.953 user 0m0.681s 00:05:31.953 sys 0m0.380s 00:05:31.953 12:40:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.953 12:40:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:31.953 ************************************ 00:05:31.953 END TEST dd_rw_offset 00:05:31.953 ************************************ 00:05:31.953 12:40:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:05:31.953 12:40:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:05:31.953 12:40:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:31.953 12:40:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:31.953 12:40:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:05:31.953 12:40:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:31.953 12:40:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:05:31.953 12:40:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:31.953 12:40:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:05:31.953 12:40:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:31.953 12:40:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:31.953 [2024-11-15 12:40:40.498572] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:31.953 [2024-11-15 12:40:40.498677] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59916 ] 00:05:31.953 { 00:05:31.953 "subsystems": [ 00:05:31.953 { 00:05:31.953 "subsystem": "bdev", 00:05:31.953 "config": [ 00:05:31.953 { 00:05:31.953 "params": { 00:05:31.953 "trtype": "pcie", 00:05:31.953 "traddr": "0000:00:10.0", 00:05:31.953 "name": "Nvme0" 00:05:31.953 }, 00:05:31.953 "method": "bdev_nvme_attach_controller" 00:05:31.953 }, 00:05:31.953 { 00:05:31.953 "method": "bdev_wait_for_examine" 00:05:31.953 } 00:05:31.953 ] 00:05:31.953 } 00:05:31.953 ] 00:05:31.953 } 00:05:32.212 [2024-11-15 12:40:40.639884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.212 [2024-11-15 12:40:40.669267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.212 [2024-11-15 12:40:40.699089] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:32.212  [2024-11-15T12:40:41.141Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:32.471 00:05:32.471 12:40:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:32.471 00:05:32.471 real 0m13.700s 00:05:32.471 user 0m9.950s 00:05:32.471 sys 0m4.296s 00:05:32.471 12:40:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.471 12:40:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:32.471 ************************************ 00:05:32.471 END TEST spdk_dd_basic_rw 00:05:32.471 ************************************ 00:05:32.471 12:40:40 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:05:32.471 12:40:40 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.471 12:40:40 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.471 12:40:40 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:32.471 ************************************ 00:05:32.471 START TEST spdk_dd_posix 00:05:32.471 ************************************ 00:05:32.471 12:40:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:05:32.471 * Looking for test storage... 
00:05:32.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.471 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:32.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.732 --rc genhtml_branch_coverage=1 00:05:32.732 --rc genhtml_function_coverage=1 00:05:32.732 --rc genhtml_legend=1 00:05:32.732 --rc geninfo_all_blocks=1 00:05:32.732 --rc geninfo_unexecuted_blocks=1 00:05:32.732 00:05:32.732 ' 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:32.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.732 --rc genhtml_branch_coverage=1 00:05:32.732 --rc genhtml_function_coverage=1 00:05:32.732 --rc genhtml_legend=1 00:05:32.732 --rc geninfo_all_blocks=1 00:05:32.732 --rc geninfo_unexecuted_blocks=1 00:05:32.732 00:05:32.732 ' 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:32.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.732 --rc genhtml_branch_coverage=1 00:05:32.732 --rc genhtml_function_coverage=1 00:05:32.732 --rc genhtml_legend=1 00:05:32.732 --rc geninfo_all_blocks=1 00:05:32.732 --rc geninfo_unexecuted_blocks=1 00:05:32.732 00:05:32.732 ' 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:32.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.732 --rc genhtml_branch_coverage=1 00:05:32.732 --rc genhtml_function_coverage=1 00:05:32.732 --rc genhtml_legend=1 00:05:32.732 --rc geninfo_all_blocks=1 00:05:32.732 --rc geninfo_unexecuted_blocks=1 00:05:32.732 00:05:32.732 ' 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:05:32.732 * First test run, liburing in use 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:32.732 ************************************ 00:05:32.732 START TEST dd_flag_append 00:05:32.732 ************************************ 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=v3zefsd4u01hihrxv58v1t547tlz4dg5 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=fbda0v47wm49ecx7o9nttejj1nnkw04g 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s v3zefsd4u01hihrxv58v1t547tlz4dg5 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s fbda0v47wm49ecx7o9nttejj1nnkw04g 00:05:32.732 12:40:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:05:32.732 [2024-11-15 12:40:41.219805] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:32.732 [2024-11-15 12:40:41.219902] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59977 ] 00:05:32.732 [2024-11-15 12:40:41.366372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.732 [2024-11-15 12:40:41.398105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.992 [2024-11-15 12:40:41.426955] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:32.992  [2024-11-15T12:40:41.662Z] Copying: 32/32 [B] (average 31 kBps) 00:05:32.992 00:05:32.992 12:40:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ fbda0v47wm49ecx7o9nttejj1nnkw04gv3zefsd4u01hihrxv58v1t547tlz4dg5 == \f\b\d\a\0\v\4\7\w\m\4\9\e\c\x\7\o\9\n\t\t\e\j\j\1\n\n\k\w\0\4\g\v\3\z\e\f\s\d\4\u\0\1\h\i\h\r\x\v\5\8\v\1\t\5\4\7\t\l\z\4\d\g\5 ]] 00:05:32.992 00:05:32.992 real 0m0.401s 00:05:32.992 user 0m0.196s 00:05:32.992 sys 0m0.168s 00:05:32.992 12:40:41 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.992 ************************************ 00:05:32.992 END TEST dd_flag_append 00:05:32.992 ************************************ 00:05:32.992 12:40:41 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:32.992 12:40:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:05:32.992 12:40:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.992 12:40:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.992 12:40:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:32.992 ************************************ 00:05:32.992 START TEST dd_flag_directory 00:05:32.992 ************************************ 00:05:32.992 12:40:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:05:32.992 12:40:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:32.992 12:40:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:05:32.992 12:40:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:32.992 12:40:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:32.992 12:40:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.992 12:40:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:32.992 12:40:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.992 12:40:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:32.992 12:40:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.992 12:40:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:32.992 12:40:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:32.992 12:40:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:33.251 [2024-11-15 12:40:41.663951] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:05:33.251 [2024-11-15 12:40:41.664042] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60011 ] 00:05:33.251 [2024-11-15 12:40:41.808889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.251 [2024-11-15 12:40:41.839231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.251 [2024-11-15 12:40:41.870412] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:33.251 [2024-11-15 12:40:41.888316] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:33.251 [2024-11-15 12:40:41.888371] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:33.251 [2024-11-15 12:40:41.888402] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:33.510 [2024-11-15 12:40:41.948221] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:33.510 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:05:33.510 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:33.510 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:05:33.510 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:05:33.510 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:05:33.510 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:33.510 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:33.510 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:05:33.510 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:33.510 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:33.510 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.510 12:40:42 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:33.510 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.510 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:33.510 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.510 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:33.510 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:33.510 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:33.510 [2024-11-15 12:40:42.067630] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:05:33.510 [2024-11-15 12:40:42.067723] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60015 ] 00:05:33.769 [2024-11-15 12:40:42.212574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.769 [2024-11-15 12:40:42.248398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.769 [2024-11-15 12:40:42.280908] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:33.769 [2024-11-15 12:40:42.299090] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:33.769 [2024-11-15 12:40:42.299139] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:33.769 [2024-11-15 12:40:42.299172] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:33.769 [2024-11-15 12:40:42.358349] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:33.769 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:05:33.769 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:33.769 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:05:33.769 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:05:33.769 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:05:33.769 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:33.769 00:05:33.769 real 0m0.816s 00:05:33.769 user 0m0.406s 00:05:33.769 sys 0m0.202s 00:05:33.769 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.769 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:05:33.769 ************************************ 00:05:33.769 END TEST dd_flag_directory 00:05:33.769 ************************************ 00:05:34.029 12:40:42 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:05:34.029 12:40:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.029 12:40:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.029 12:40:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:34.029 ************************************ 00:05:34.029 START TEST dd_flag_nofollow 00:05:34.029 ************************************ 00:05:34.029 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:05:34.029 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:34.029 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:34.029 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:34.029 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:34.029 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:34.029 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:05:34.029 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:34.029 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:34.029 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.029 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:34.029 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.029 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:34.029 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.029 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:34.029 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:34.029 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:34.029 [2024-11-15 12:40:42.541532] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:34.029 [2024-11-15 12:40:42.541645] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60049 ] 00:05:34.029 [2024-11-15 12:40:42.685546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.288 [2024-11-15 12:40:42.716071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.288 [2024-11-15 12:40:42.745936] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:34.288 [2024-11-15 12:40:42.763427] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:34.288 [2024-11-15 12:40:42.763478] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:34.288 [2024-11-15 12:40:42.763512] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:34.288 [2024-11-15 12:40:42.821686] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:34.288 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:05:34.288 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:34.288 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:05:34.288 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:05:34.288 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:05:34.288 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:34.288 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:34.288 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:05:34.288 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:34.288 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:34.288 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.288 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:34.288 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.288 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:34.288 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.288 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:34.288 12:40:42 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:34.288 12:40:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:34.288 [2024-11-15 12:40:42.938065] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:05:34.288 [2024-11-15 12:40:42.938157] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60053 ] 00:05:34.547 [2024-11-15 12:40:43.081232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.547 [2024-11-15 12:40:43.111277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.547 [2024-11-15 12:40:43.142201] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:34.547 [2024-11-15 12:40:43.160022] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:34.547 [2024-11-15 12:40:43.160078] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:34.547 [2024-11-15 12:40:43.160111] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:34.806 [2024-11-15 12:40:43.219417] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:34.806 12:40:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:05:34.806 12:40:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:34.806 12:40:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:05:34.806 12:40:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:05:34.806 12:40:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:05:34.806 12:40:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:34.806 12:40:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:05:34.806 12:40:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:05:34.806 12:40:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:05:34.806 12:40:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:34.806 [2024-11-15 12:40:43.332217] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:34.806 [2024-11-15 12:40:43.332301] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60059 ] 00:05:34.806 [2024-11-15 12:40:43.469572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.065 [2024-11-15 12:40:43.499086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.065 [2024-11-15 12:40:43.527357] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:35.065  [2024-11-15T12:40:43.735Z] Copying: 512/512 [B] (average 500 kBps) 00:05:35.065 00:05:35.065 12:40:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ ok2mmnyvfsqm61jmkuj30l98x8ospvx7zxs1dybs5cprr7hslvhmsky1glezxo68qfnsyvnaustehmih1exklxoybfr254w8d9wckg253azflx1hg5bzud8enuy60euzemp3iem691tpenpl21lbjrrzn0rh33wbn5oz1tyj5gj7n9dpvm8h3581ap8r1881rvu3f35ynu8y62b4rttkrjs88vjxksc8lo2wg0de3j1fvj0yf67ut3h7pyju2z59jcxirtd90gtpdc5h518t98nvltzdg4ftqkw2ccsmj8w4lf8h3mzd5qqw10vl5ckc9xaek4cz0v7c3e0pnaz0peaias8yx0ui8baugcix2hxs65u7a3cwhvanwt1qq7i06m39p2t606r6mv8901fhexkzt33fwaqa1lku07c5e6h9qt6xws6ne4vackk4zejjcaiy2gjbtnrwdu5yw81hctbd1752bc88nqnms4dau6asjjixslxkbgz529ma6u3e == \o\k\2\m\m\n\y\v\f\s\q\m\6\1\j\m\k\u\j\3\0\l\9\8\x\8\o\s\p\v\x\7\z\x\s\1\d\y\b\s\5\c\p\r\r\7\h\s\l\v\h\m\s\k\y\1\g\l\e\z\x\o\6\8\q\f\n\s\y\v\n\a\u\s\t\e\h\m\i\h\1\e\x\k\l\x\o\y\b\f\r\2\5\4\w\8\d\9\w\c\k\g\2\5\3\a\z\f\l\x\1\h\g\5\b\z\u\d\8\e\n\u\y\6\0\e\u\z\e\m\p\3\i\e\m\6\9\1\t\p\e\n\p\l\2\1\l\b\j\r\r\z\n\0\r\h\3\3\w\b\n\5\o\z\1\t\y\j\5\g\j\7\n\9\d\p\v\m\8\h\3\5\8\1\a\p\8\r\1\8\8\1\r\v\u\3\f\3\5\y\n\u\8\y\6\2\b\4\r\t\t\k\r\j\s\8\8\v\j\x\k\s\c\8\l\o\2\w\g\0\d\e\3\j\1\f\v\j\0\y\f\6\7\u\t\3\h\7\p\y\j\u\2\z\5\9\j\c\x\i\r\t\d\9\0\g\t\p\d\c\5\h\5\1\8\t\9\8\n\v\l\t\z\d\g\4\f\t\q\k\w\2\c\c\s\m\j\8\w\4\l\f\8\h\3\m\z\d\5\q\q\w\1\0\v\l\5\c\k\c\9\x\a\e\k\4\c\z\0\v\7\c\3\e\0\p\n\a\z\0\p\e\a\i\a\s\8\y\x\0\u\i\8\b\a\u\g\c\i\x\2\h\x\s\6\5\u\7\a\3\c\w\h\v\a\n\w\t\1\q\q\7\i\0\6\m\3\9\p\2\t\6\0\6\r\6\m\v\8\9\0\1\f\h\e\x\k\z\t\3\3\f\w\a\q\a\1\l\k\u\0\7\c\5\e\6\h\9\q\t\6\x\w\s\6\n\e\4\v\a\c\k\k\4\z\e\j\j\c\a\i\y\2\g\j\b\t\n\r\w\d\u\5\y\w\8\1\h\c\t\b\d\1\7\5\2\b\c\8\8\n\q\n\m\s\4\d\a\u\6\a\s\j\j\i\x\s\l\x\k\b\g\z\5\2\9\m\a\6\u\3\e ]] 00:05:35.065 00:05:35.065 real 0m1.191s 00:05:35.065 user 0m0.595s 00:05:35.065 sys 0m0.358s 00:05:35.065 12:40:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.065 ************************************ 00:05:35.065 END TEST dd_flag_nofollow 00:05:35.065 12:40:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:05:35.065 ************************************ 00:05:35.065 12:40:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:05:35.065 12:40:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.065 12:40:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.065 12:40:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:35.065 ************************************ 00:05:35.065 START TEST dd_flag_noatime 00:05:35.065 ************************************ 00:05:35.065 12:40:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:05:35.065 12:40:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:05:35.065 12:40:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:05:35.065 12:40:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:05:35.065 12:40:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:05:35.065 12:40:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:05:35.065 12:40:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:35.065 12:40:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1731674443 00:05:35.324 12:40:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:35.324 12:40:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1731674443 00:05:35.324 12:40:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:05:36.269 12:40:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:36.269 [2024-11-15 12:40:44.802093] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:05:36.269 [2024-11-15 12:40:44.802190] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60103 ] 00:05:36.528 [2024-11-15 12:40:44.946817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.528 [2024-11-15 12:40:44.980008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.528 [2024-11-15 12:40:45.014055] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:36.528  [2024-11-15T12:40:45.198Z] Copying: 512/512 [B] (average 500 kBps) 00:05:36.528 00:05:36.528 12:40:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:36.528 12:40:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1731674443 )) 00:05:36.528 12:40:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:36.528 12:40:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1731674443 )) 00:05:36.528 12:40:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:36.786 [2024-11-15 12:40:45.223847] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:36.786 [2024-11-15 12:40:45.223943] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60111 ] 00:05:36.786 [2024-11-15 12:40:45.372081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.786 [2024-11-15 12:40:45.406789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.786 [2024-11-15 12:40:45.436998] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:37.045  [2024-11-15T12:40:45.715Z] Copying: 512/512 [B] (average 500 kBps) 00:05:37.045 00:05:37.045 12:40:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:37.045 ************************************ 00:05:37.045 END TEST dd_flag_noatime 00:05:37.045 ************************************ 00:05:37.045 12:40:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1731674445 )) 00:05:37.045 00:05:37.045 real 0m1.859s 00:05:37.045 user 0m0.422s 00:05:37.045 sys 0m0.373s 00:05:37.045 12:40:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.045 12:40:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:05:37.045 12:40:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:05:37.045 12:40:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.045 12:40:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.045 12:40:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:37.045 ************************************ 00:05:37.045 START TEST dd_flags_misc 00:05:37.045 ************************************ 00:05:37.045 12:40:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:05:37.045 12:40:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:05:37.045 12:40:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:05:37.045 12:40:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:05:37.045 12:40:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:05:37.045 12:40:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:05:37.045 12:40:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:05:37.045 12:40:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:05:37.046 12:40:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:37.046 12:40:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:05:37.046 [2024-11-15 12:40:45.698137] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:37.046 [2024-11-15 12:40:45.698704] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60145 ] 00:05:37.305 [2024-11-15 12:40:45.841995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.305 [2024-11-15 12:40:45.872457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.305 [2024-11-15 12:40:45.903243] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:37.305  [2024-11-15T12:40:46.234Z] Copying: 512/512 [B] (average 500 kBps) 00:05:37.564 00:05:37.564 12:40:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 2l0x2ew0gx223k2kv7m3tx1s6kcrmb2ww9faqkupblxcus9g8nb21pbrt4qrqpszn23sle4z7xq3bs0t3640wdhi9qe2xhee8yovsogeh9vl94umwo7rqz3zme9tltmbf1wt8uav7phe0yehf088mx6rhp9k6a2ru5eacr8qat1hw9ll87yperobmr4le50lr2p43awur2s01zwr84iqllenfhntcymymemsgvbxjxf1m7yl7rnsgu2s3g41agx4b9aew97vskb2a5xcc8ttj5ivqb2chx5jf1kvxv0c0njs5kzsef363409w4rd1ykrr3nmvxzf2yj3s129tqpm1y89arplefh605nghjq4a7qal1quo46j39wpy3idqj4ckscr70xddbqyoof47mjh91fwjawumgtpncmmptmftj5vaf2ztpl6lmf2sn0lyuqtbmi3yxxwxnk2gxcn73fol3ybb29l7gd0brr1g6w8xn6al81fibckqzjjjw4tnrz0 == \2\l\0\x\2\e\w\0\g\x\2\2\3\k\2\k\v\7\m\3\t\x\1\s\6\k\c\r\m\b\2\w\w\9\f\a\q\k\u\p\b\l\x\c\u\s\9\g\8\n\b\2\1\p\b\r\t\4\q\r\q\p\s\z\n\2\3\s\l\e\4\z\7\x\q\3\b\s\0\t\3\6\4\0\w\d\h\i\9\q\e\2\x\h\e\e\8\y\o\v\s\o\g\e\h\9\v\l\9\4\u\m\w\o\7\r\q\z\3\z\m\e\9\t\l\t\m\b\f\1\w\t\8\u\a\v\7\p\h\e\0\y\e\h\f\0\8\8\m\x\6\r\h\p\9\k\6\a\2\r\u\5\e\a\c\r\8\q\a\t\1\h\w\9\l\l\8\7\y\p\e\r\o\b\m\r\4\l\e\5\0\l\r\2\p\4\3\a\w\u\r\2\s\0\1\z\w\r\8\4\i\q\l\l\e\n\f\h\n\t\c\y\m\y\m\e\m\s\g\v\b\x\j\x\f\1\m\7\y\l\7\r\n\s\g\u\2\s\3\g\4\1\a\g\x\4\b\9\a\e\w\9\7\v\s\k\b\2\a\5\x\c\c\8\t\t\j\5\i\v\q\b\2\c\h\x\5\j\f\1\k\v\x\v\0\c\0\n\j\s\5\k\z\s\e\f\3\6\3\4\0\9\w\4\r\d\1\y\k\r\r\3\n\m\v\x\z\f\2\y\j\3\s\1\2\9\t\q\p\m\1\y\8\9\a\r\p\l\e\f\h\6\0\5\n\g\h\j\q\4\a\7\q\a\l\1\q\u\o\4\6\j\3\9\w\p\y\3\i\d\q\j\4\c\k\s\c\r\7\0\x\d\d\b\q\y\o\o\f\4\7\m\j\h\9\1\f\w\j\a\w\u\m\g\t\p\n\c\m\m\p\t\m\f\t\j\5\v\a\f\2\z\t\p\l\6\l\m\f\2\s\n\0\l\y\u\q\t\b\m\i\3\y\x\x\w\x\n\k\2\g\x\c\n\7\3\f\o\l\3\y\b\b\2\9\l\7\g\d\0\b\r\r\1\g\6\w\8\x\n\6\a\l\8\1\f\i\b\c\k\q\z\j\j\j\w\4\t\n\r\z\0 ]] 00:05:37.564 12:40:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:37.564 12:40:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:05:37.564 [2024-11-15 12:40:46.097704] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:37.564 [2024-11-15 12:40:46.097788] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60149 ] 00:05:37.822 [2024-11-15 12:40:46.238136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.822 [2024-11-15 12:40:46.267238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.822 [2024-11-15 12:40:46.296367] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:37.822  [2024-11-15T12:40:46.492Z] Copying: 512/512 [B] (average 500 kBps) 00:05:37.822 00:05:37.822 12:40:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 2l0x2ew0gx223k2kv7m3tx1s6kcrmb2ww9faqkupblxcus9g8nb21pbrt4qrqpszn23sle4z7xq3bs0t3640wdhi9qe2xhee8yovsogeh9vl94umwo7rqz3zme9tltmbf1wt8uav7phe0yehf088mx6rhp9k6a2ru5eacr8qat1hw9ll87yperobmr4le50lr2p43awur2s01zwr84iqllenfhntcymymemsgvbxjxf1m7yl7rnsgu2s3g41agx4b9aew97vskb2a5xcc8ttj5ivqb2chx5jf1kvxv0c0njs5kzsef363409w4rd1ykrr3nmvxzf2yj3s129tqpm1y89arplefh605nghjq4a7qal1quo46j39wpy3idqj4ckscr70xddbqyoof47mjh91fwjawumgtpncmmptmftj5vaf2ztpl6lmf2sn0lyuqtbmi3yxxwxnk2gxcn73fol3ybb29l7gd0brr1g6w8xn6al81fibckqzjjjw4tnrz0 == \2\l\0\x\2\e\w\0\g\x\2\2\3\k\2\k\v\7\m\3\t\x\1\s\6\k\c\r\m\b\2\w\w\9\f\a\q\k\u\p\b\l\x\c\u\s\9\g\8\n\b\2\1\p\b\r\t\4\q\r\q\p\s\z\n\2\3\s\l\e\4\z\7\x\q\3\b\s\0\t\3\6\4\0\w\d\h\i\9\q\e\2\x\h\e\e\8\y\o\v\s\o\g\e\h\9\v\l\9\4\u\m\w\o\7\r\q\z\3\z\m\e\9\t\l\t\m\b\f\1\w\t\8\u\a\v\7\p\h\e\0\y\e\h\f\0\8\8\m\x\6\r\h\p\9\k\6\a\2\r\u\5\e\a\c\r\8\q\a\t\1\h\w\9\l\l\8\7\y\p\e\r\o\b\m\r\4\l\e\5\0\l\r\2\p\4\3\a\w\u\r\2\s\0\1\z\w\r\8\4\i\q\l\l\e\n\f\h\n\t\c\y\m\y\m\e\m\s\g\v\b\x\j\x\f\1\m\7\y\l\7\r\n\s\g\u\2\s\3\g\4\1\a\g\x\4\b\9\a\e\w\9\7\v\s\k\b\2\a\5\x\c\c\8\t\t\j\5\i\v\q\b\2\c\h\x\5\j\f\1\k\v\x\v\0\c\0\n\j\s\5\k\z\s\e\f\3\6\3\4\0\9\w\4\r\d\1\y\k\r\r\3\n\m\v\x\z\f\2\y\j\3\s\1\2\9\t\q\p\m\1\y\8\9\a\r\p\l\e\f\h\6\0\5\n\g\h\j\q\4\a\7\q\a\l\1\q\u\o\4\6\j\3\9\w\p\y\3\i\d\q\j\4\c\k\s\c\r\7\0\x\d\d\b\q\y\o\o\f\4\7\m\j\h\9\1\f\w\j\a\w\u\m\g\t\p\n\c\m\m\p\t\m\f\t\j\5\v\a\f\2\z\t\p\l\6\l\m\f\2\s\n\0\l\y\u\q\t\b\m\i\3\y\x\x\w\x\n\k\2\g\x\c\n\7\3\f\o\l\3\y\b\b\2\9\l\7\g\d\0\b\r\r\1\g\6\w\8\x\n\6\a\l\8\1\f\i\b\c\k\q\z\j\j\j\w\4\t\n\r\z\0 ]] 00:05:37.822 12:40:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:37.822 12:40:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:05:38.081 [2024-11-15 12:40:46.495256] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:38.081 [2024-11-15 12:40:46.495337] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60157 ] 00:05:38.081 [2024-11-15 12:40:46.640592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.081 [2024-11-15 12:40:46.669122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.081 [2024-11-15 12:40:46.696872] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:38.081  [2024-11-15T12:40:47.010Z] Copying: 512/512 [B] (average 100 kBps) 00:05:38.340 00:05:38.340 12:40:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 2l0x2ew0gx223k2kv7m3tx1s6kcrmb2ww9faqkupblxcus9g8nb21pbrt4qrqpszn23sle4z7xq3bs0t3640wdhi9qe2xhee8yovsogeh9vl94umwo7rqz3zme9tltmbf1wt8uav7phe0yehf088mx6rhp9k6a2ru5eacr8qat1hw9ll87yperobmr4le50lr2p43awur2s01zwr84iqllenfhntcymymemsgvbxjxf1m7yl7rnsgu2s3g41agx4b9aew97vskb2a5xcc8ttj5ivqb2chx5jf1kvxv0c0njs5kzsef363409w4rd1ykrr3nmvxzf2yj3s129tqpm1y89arplefh605nghjq4a7qal1quo46j39wpy3idqj4ckscr70xddbqyoof47mjh91fwjawumgtpncmmptmftj5vaf2ztpl6lmf2sn0lyuqtbmi3yxxwxnk2gxcn73fol3ybb29l7gd0brr1g6w8xn6al81fibckqzjjjw4tnrz0 == \2\l\0\x\2\e\w\0\g\x\2\2\3\k\2\k\v\7\m\3\t\x\1\s\6\k\c\r\m\b\2\w\w\9\f\a\q\k\u\p\b\l\x\c\u\s\9\g\8\n\b\2\1\p\b\r\t\4\q\r\q\p\s\z\n\2\3\s\l\e\4\z\7\x\q\3\b\s\0\t\3\6\4\0\w\d\h\i\9\q\e\2\x\h\e\e\8\y\o\v\s\o\g\e\h\9\v\l\9\4\u\m\w\o\7\r\q\z\3\z\m\e\9\t\l\t\m\b\f\1\w\t\8\u\a\v\7\p\h\e\0\y\e\h\f\0\8\8\m\x\6\r\h\p\9\k\6\a\2\r\u\5\e\a\c\r\8\q\a\t\1\h\w\9\l\l\8\7\y\p\e\r\o\b\m\r\4\l\e\5\0\l\r\2\p\4\3\a\w\u\r\2\s\0\1\z\w\r\8\4\i\q\l\l\e\n\f\h\n\t\c\y\m\y\m\e\m\s\g\v\b\x\j\x\f\1\m\7\y\l\7\r\n\s\g\u\2\s\3\g\4\1\a\g\x\4\b\9\a\e\w\9\7\v\s\k\b\2\a\5\x\c\c\8\t\t\j\5\i\v\q\b\2\c\h\x\5\j\f\1\k\v\x\v\0\c\0\n\j\s\5\k\z\s\e\f\3\6\3\4\0\9\w\4\r\d\1\y\k\r\r\3\n\m\v\x\z\f\2\y\j\3\s\1\2\9\t\q\p\m\1\y\8\9\a\r\p\l\e\f\h\6\0\5\n\g\h\j\q\4\a\7\q\a\l\1\q\u\o\4\6\j\3\9\w\p\y\3\i\d\q\j\4\c\k\s\c\r\7\0\x\d\d\b\q\y\o\o\f\4\7\m\j\h\9\1\f\w\j\a\w\u\m\g\t\p\n\c\m\m\p\t\m\f\t\j\5\v\a\f\2\z\t\p\l\6\l\m\f\2\s\n\0\l\y\u\q\t\b\m\i\3\y\x\x\w\x\n\k\2\g\x\c\n\7\3\f\o\l\3\y\b\b\2\9\l\7\g\d\0\b\r\r\1\g\6\w\8\x\n\6\a\l\8\1\f\i\b\c\k\q\z\j\j\j\w\4\t\n\r\z\0 ]] 00:05:38.340 12:40:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:38.340 12:40:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:05:38.340 [2024-11-15 12:40:46.892291] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:38.340 [2024-11-15 12:40:46.892381] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60168 ] 00:05:38.599 [2024-11-15 12:40:47.032453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.599 [2024-11-15 12:40:47.061686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.599 [2024-11-15 12:40:47.090796] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:38.599  [2024-11-15T12:40:47.269Z] Copying: 512/512 [B] (average 500 kBps) 00:05:38.599 00:05:38.599 12:40:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 2l0x2ew0gx223k2kv7m3tx1s6kcrmb2ww9faqkupblxcus9g8nb21pbrt4qrqpszn23sle4z7xq3bs0t3640wdhi9qe2xhee8yovsogeh9vl94umwo7rqz3zme9tltmbf1wt8uav7phe0yehf088mx6rhp9k6a2ru5eacr8qat1hw9ll87yperobmr4le50lr2p43awur2s01zwr84iqllenfhntcymymemsgvbxjxf1m7yl7rnsgu2s3g41agx4b9aew97vskb2a5xcc8ttj5ivqb2chx5jf1kvxv0c0njs5kzsef363409w4rd1ykrr3nmvxzf2yj3s129tqpm1y89arplefh605nghjq4a7qal1quo46j39wpy3idqj4ckscr70xddbqyoof47mjh91fwjawumgtpncmmptmftj5vaf2ztpl6lmf2sn0lyuqtbmi3yxxwxnk2gxcn73fol3ybb29l7gd0brr1g6w8xn6al81fibckqzjjjw4tnrz0 == \2\l\0\x\2\e\w\0\g\x\2\2\3\k\2\k\v\7\m\3\t\x\1\s\6\k\c\r\m\b\2\w\w\9\f\a\q\k\u\p\b\l\x\c\u\s\9\g\8\n\b\2\1\p\b\r\t\4\q\r\q\p\s\z\n\2\3\s\l\e\4\z\7\x\q\3\b\s\0\t\3\6\4\0\w\d\h\i\9\q\e\2\x\h\e\e\8\y\o\v\s\o\g\e\h\9\v\l\9\4\u\m\w\o\7\r\q\z\3\z\m\e\9\t\l\t\m\b\f\1\w\t\8\u\a\v\7\p\h\e\0\y\e\h\f\0\8\8\m\x\6\r\h\p\9\k\6\a\2\r\u\5\e\a\c\r\8\q\a\t\1\h\w\9\l\l\8\7\y\p\e\r\o\b\m\r\4\l\e\5\0\l\r\2\p\4\3\a\w\u\r\2\s\0\1\z\w\r\8\4\i\q\l\l\e\n\f\h\n\t\c\y\m\y\m\e\m\s\g\v\b\x\j\x\f\1\m\7\y\l\7\r\n\s\g\u\2\s\3\g\4\1\a\g\x\4\b\9\a\e\w\9\7\v\s\k\b\2\a\5\x\c\c\8\t\t\j\5\i\v\q\b\2\c\h\x\5\j\f\1\k\v\x\v\0\c\0\n\j\s\5\k\z\s\e\f\3\6\3\4\0\9\w\4\r\d\1\y\k\r\r\3\n\m\v\x\z\f\2\y\j\3\s\1\2\9\t\q\p\m\1\y\8\9\a\r\p\l\e\f\h\6\0\5\n\g\h\j\q\4\a\7\q\a\l\1\q\u\o\4\6\j\3\9\w\p\y\3\i\d\q\j\4\c\k\s\c\r\7\0\x\d\d\b\q\y\o\o\f\4\7\m\j\h\9\1\f\w\j\a\w\u\m\g\t\p\n\c\m\m\p\t\m\f\t\j\5\v\a\f\2\z\t\p\l\6\l\m\f\2\s\n\0\l\y\u\q\t\b\m\i\3\y\x\x\w\x\n\k\2\g\x\c\n\7\3\f\o\l\3\y\b\b\2\9\l\7\g\d\0\b\r\r\1\g\6\w\8\x\n\6\a\l\8\1\f\i\b\c\k\q\z\j\j\j\w\4\t\n\r\z\0 ]] 00:05:38.599 12:40:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:05:38.599 12:40:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:05:38.599 12:40:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:05:38.599 12:40:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:05:38.599 12:40:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:38.599 12:40:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:05:38.858 [2024-11-15 12:40:47.297188] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:38.858 [2024-11-15 12:40:47.297278] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60172 ] 00:05:38.858 [2024-11-15 12:40:47.437573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.858 [2024-11-15 12:40:47.469064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.858 [2024-11-15 12:40:47.502799] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:38.858  [2024-11-15T12:40:47.787Z] Copying: 512/512 [B] (average 500 kBps) 00:05:39.117 00:05:39.117 12:40:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 7hafedxxigwfzh9ev9t4it4copexwsgcyil9gn5orzxzou6r9bd9etgsrbd3bvrew6uv1yki638id71ss6mmamkv7w0kzejey0k22txqupg218li453g1wauwn48vcu4ve7qbe73qvysvxevkk9w3mdo47mu1aahrcrdwbu3p5hdllk1u93jyigimo33ioe3tabsf5fkpm4zjo6shj2bjypyumakuvljokiyms2bfh52nd7teas7u42g5ffbyu84girb6yevk1t7udfjaqrgbs2qb4fsprujqhe9iluvnu4e4cokdqq8pnckcny0qrs89jjvezaxy8dk6v2psl47y4m3eo0hezppl2sjbebdxausubswyrnf016yr0n6n58ewt7wgewsiwjuhn5670nvnvmhsy2bx3mfjcaofp2ph965dfih531treu8ueunkrttk0047ecsmd2ery95gtfvcyy4oanzhx1gol2zr9h7zeruuzs88ve7m0n15drqdfqn == \7\h\a\f\e\d\x\x\i\g\w\f\z\h\9\e\v\9\t\4\i\t\4\c\o\p\e\x\w\s\g\c\y\i\l\9\g\n\5\o\r\z\x\z\o\u\6\r\9\b\d\9\e\t\g\s\r\b\d\3\b\v\r\e\w\6\u\v\1\y\k\i\6\3\8\i\d\7\1\s\s\6\m\m\a\m\k\v\7\w\0\k\z\e\j\e\y\0\k\2\2\t\x\q\u\p\g\2\1\8\l\i\4\5\3\g\1\w\a\u\w\n\4\8\v\c\u\4\v\e\7\q\b\e\7\3\q\v\y\s\v\x\e\v\k\k\9\w\3\m\d\o\4\7\m\u\1\a\a\h\r\c\r\d\w\b\u\3\p\5\h\d\l\l\k\1\u\9\3\j\y\i\g\i\m\o\3\3\i\o\e\3\t\a\b\s\f\5\f\k\p\m\4\z\j\o\6\s\h\j\2\b\j\y\p\y\u\m\a\k\u\v\l\j\o\k\i\y\m\s\2\b\f\h\5\2\n\d\7\t\e\a\s\7\u\4\2\g\5\f\f\b\y\u\8\4\g\i\r\b\6\y\e\v\k\1\t\7\u\d\f\j\a\q\r\g\b\s\2\q\b\4\f\s\p\r\u\j\q\h\e\9\i\l\u\v\n\u\4\e\4\c\o\k\d\q\q\8\p\n\c\k\c\n\y\0\q\r\s\8\9\j\j\v\e\z\a\x\y\8\d\k\6\v\2\p\s\l\4\7\y\4\m\3\e\o\0\h\e\z\p\p\l\2\s\j\b\e\b\d\x\a\u\s\u\b\s\w\y\r\n\f\0\1\6\y\r\0\n\6\n\5\8\e\w\t\7\w\g\e\w\s\i\w\j\u\h\n\5\6\7\0\n\v\n\v\m\h\s\y\2\b\x\3\m\f\j\c\a\o\f\p\2\p\h\9\6\5\d\f\i\h\5\3\1\t\r\e\u\8\u\e\u\n\k\r\t\t\k\0\0\4\7\e\c\s\m\d\2\e\r\y\9\5\g\t\f\v\c\y\y\4\o\a\n\z\h\x\1\g\o\l\2\z\r\9\h\7\z\e\r\u\u\z\s\8\8\v\e\7\m\0\n\1\5\d\r\q\d\f\q\n ]] 00:05:39.117 12:40:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:39.117 12:40:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:05:39.117 [2024-11-15 12:40:47.678545] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:39.117 [2024-11-15 12:40:47.678642] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60187 ] 00:05:39.376 [2024-11-15 12:40:47.817149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.376 [2024-11-15 12:40:47.844482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.376 [2024-11-15 12:40:47.871128] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:39.376  [2024-11-15T12:40:48.046Z] Copying: 512/512 [B] (average 500 kBps) 00:05:39.376 00:05:39.376 12:40:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 7hafedxxigwfzh9ev9t4it4copexwsgcyil9gn5orzxzou6r9bd9etgsrbd3bvrew6uv1yki638id71ss6mmamkv7w0kzejey0k22txqupg218li453g1wauwn48vcu4ve7qbe73qvysvxevkk9w3mdo47mu1aahrcrdwbu3p5hdllk1u93jyigimo33ioe3tabsf5fkpm4zjo6shj2bjypyumakuvljokiyms2bfh52nd7teas7u42g5ffbyu84girb6yevk1t7udfjaqrgbs2qb4fsprujqhe9iluvnu4e4cokdqq8pnckcny0qrs89jjvezaxy8dk6v2psl47y4m3eo0hezppl2sjbebdxausubswyrnf016yr0n6n58ewt7wgewsiwjuhn5670nvnvmhsy2bx3mfjcaofp2ph965dfih531treu8ueunkrttk0047ecsmd2ery95gtfvcyy4oanzhx1gol2zr9h7zeruuzs88ve7m0n15drqdfqn == \7\h\a\f\e\d\x\x\i\g\w\f\z\h\9\e\v\9\t\4\i\t\4\c\o\p\e\x\w\s\g\c\y\i\l\9\g\n\5\o\r\z\x\z\o\u\6\r\9\b\d\9\e\t\g\s\r\b\d\3\b\v\r\e\w\6\u\v\1\y\k\i\6\3\8\i\d\7\1\s\s\6\m\m\a\m\k\v\7\w\0\k\z\e\j\e\y\0\k\2\2\t\x\q\u\p\g\2\1\8\l\i\4\5\3\g\1\w\a\u\w\n\4\8\v\c\u\4\v\e\7\q\b\e\7\3\q\v\y\s\v\x\e\v\k\k\9\w\3\m\d\o\4\7\m\u\1\a\a\h\r\c\r\d\w\b\u\3\p\5\h\d\l\l\k\1\u\9\3\j\y\i\g\i\m\o\3\3\i\o\e\3\t\a\b\s\f\5\f\k\p\m\4\z\j\o\6\s\h\j\2\b\j\y\p\y\u\m\a\k\u\v\l\j\o\k\i\y\m\s\2\b\f\h\5\2\n\d\7\t\e\a\s\7\u\4\2\g\5\f\f\b\y\u\8\4\g\i\r\b\6\y\e\v\k\1\t\7\u\d\f\j\a\q\r\g\b\s\2\q\b\4\f\s\p\r\u\j\q\h\e\9\i\l\u\v\n\u\4\e\4\c\o\k\d\q\q\8\p\n\c\k\c\n\y\0\q\r\s\8\9\j\j\v\e\z\a\x\y\8\d\k\6\v\2\p\s\l\4\7\y\4\m\3\e\o\0\h\e\z\p\p\l\2\s\j\b\e\b\d\x\a\u\s\u\b\s\w\y\r\n\f\0\1\6\y\r\0\n\6\n\5\8\e\w\t\7\w\g\e\w\s\i\w\j\u\h\n\5\6\7\0\n\v\n\v\m\h\s\y\2\b\x\3\m\f\j\c\a\o\f\p\2\p\h\9\6\5\d\f\i\h\5\3\1\t\r\e\u\8\u\e\u\n\k\r\t\t\k\0\0\4\7\e\c\s\m\d\2\e\r\y\9\5\g\t\f\v\c\y\y\4\o\a\n\z\h\x\1\g\o\l\2\z\r\9\h\7\z\e\r\u\u\z\s\8\8\v\e\7\m\0\n\1\5\d\r\q\d\f\q\n ]] 00:05:39.376 12:40:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:39.376 12:40:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:05:39.636 [2024-11-15 12:40:48.049808] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:39.636 [2024-11-15 12:40:48.049898] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60191 ] 00:05:39.636 [2024-11-15 12:40:48.192319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.636 [2024-11-15 12:40:48.221854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.636 [2024-11-15 12:40:48.251864] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:39.636  [2024-11-15T12:40:48.564Z] Copying: 512/512 [B] (average 250 kBps) 00:05:39.894 00:05:39.895 12:40:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 7hafedxxigwfzh9ev9t4it4copexwsgcyil9gn5orzxzou6r9bd9etgsrbd3bvrew6uv1yki638id71ss6mmamkv7w0kzejey0k22txqupg218li453g1wauwn48vcu4ve7qbe73qvysvxevkk9w3mdo47mu1aahrcrdwbu3p5hdllk1u93jyigimo33ioe3tabsf5fkpm4zjo6shj2bjypyumakuvljokiyms2bfh52nd7teas7u42g5ffbyu84girb6yevk1t7udfjaqrgbs2qb4fsprujqhe9iluvnu4e4cokdqq8pnckcny0qrs89jjvezaxy8dk6v2psl47y4m3eo0hezppl2sjbebdxausubswyrnf016yr0n6n58ewt7wgewsiwjuhn5670nvnvmhsy2bx3mfjcaofp2ph965dfih531treu8ueunkrttk0047ecsmd2ery95gtfvcyy4oanzhx1gol2zr9h7zeruuzs88ve7m0n15drqdfqn == \7\h\a\f\e\d\x\x\i\g\w\f\z\h\9\e\v\9\t\4\i\t\4\c\o\p\e\x\w\s\g\c\y\i\l\9\g\n\5\o\r\z\x\z\o\u\6\r\9\b\d\9\e\t\g\s\r\b\d\3\b\v\r\e\w\6\u\v\1\y\k\i\6\3\8\i\d\7\1\s\s\6\m\m\a\m\k\v\7\w\0\k\z\e\j\e\y\0\k\2\2\t\x\q\u\p\g\2\1\8\l\i\4\5\3\g\1\w\a\u\w\n\4\8\v\c\u\4\v\e\7\q\b\e\7\3\q\v\y\s\v\x\e\v\k\k\9\w\3\m\d\o\4\7\m\u\1\a\a\h\r\c\r\d\w\b\u\3\p\5\h\d\l\l\k\1\u\9\3\j\y\i\g\i\m\o\3\3\i\o\e\3\t\a\b\s\f\5\f\k\p\m\4\z\j\o\6\s\h\j\2\b\j\y\p\y\u\m\a\k\u\v\l\j\o\k\i\y\m\s\2\b\f\h\5\2\n\d\7\t\e\a\s\7\u\4\2\g\5\f\f\b\y\u\8\4\g\i\r\b\6\y\e\v\k\1\t\7\u\d\f\j\a\q\r\g\b\s\2\q\b\4\f\s\p\r\u\j\q\h\e\9\i\l\u\v\n\u\4\e\4\c\o\k\d\q\q\8\p\n\c\k\c\n\y\0\q\r\s\8\9\j\j\v\e\z\a\x\y\8\d\k\6\v\2\p\s\l\4\7\y\4\m\3\e\o\0\h\e\z\p\p\l\2\s\j\b\e\b\d\x\a\u\s\u\b\s\w\y\r\n\f\0\1\6\y\r\0\n\6\n\5\8\e\w\t\7\w\g\e\w\s\i\w\j\u\h\n\5\6\7\0\n\v\n\v\m\h\s\y\2\b\x\3\m\f\j\c\a\o\f\p\2\p\h\9\6\5\d\f\i\h\5\3\1\t\r\e\u\8\u\e\u\n\k\r\t\t\k\0\0\4\7\e\c\s\m\d\2\e\r\y\9\5\g\t\f\v\c\y\y\4\o\a\n\z\h\x\1\g\o\l\2\z\r\9\h\7\z\e\r\u\u\z\s\8\8\v\e\7\m\0\n\1\5\d\r\q\d\f\q\n ]] 00:05:39.895 12:40:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:39.895 12:40:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:05:39.895 [2024-11-15 12:40:48.448352] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:39.895 [2024-11-15 12:40:48.448443] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60195 ] 00:05:40.153 [2024-11-15 12:40:48.592802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.153 [2024-11-15 12:40:48.622051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.153 [2024-11-15 12:40:48.651792] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:40.153  [2024-11-15T12:40:48.823Z] Copying: 512/512 [B] (average 250 kBps) 00:05:40.153 00:05:40.154 12:40:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 7hafedxxigwfzh9ev9t4it4copexwsgcyil9gn5orzxzou6r9bd9etgsrbd3bvrew6uv1yki638id71ss6mmamkv7w0kzejey0k22txqupg218li453g1wauwn48vcu4ve7qbe73qvysvxevkk9w3mdo47mu1aahrcrdwbu3p5hdllk1u93jyigimo33ioe3tabsf5fkpm4zjo6shj2bjypyumakuvljokiyms2bfh52nd7teas7u42g5ffbyu84girb6yevk1t7udfjaqrgbs2qb4fsprujqhe9iluvnu4e4cokdqq8pnckcny0qrs89jjvezaxy8dk6v2psl47y4m3eo0hezppl2sjbebdxausubswyrnf016yr0n6n58ewt7wgewsiwjuhn5670nvnvmhsy2bx3mfjcaofp2ph965dfih531treu8ueunkrttk0047ecsmd2ery95gtfvcyy4oanzhx1gol2zr9h7zeruuzs88ve7m0n15drqdfqn == \7\h\a\f\e\d\x\x\i\g\w\f\z\h\9\e\v\9\t\4\i\t\4\c\o\p\e\x\w\s\g\c\y\i\l\9\g\n\5\o\r\z\x\z\o\u\6\r\9\b\d\9\e\t\g\s\r\b\d\3\b\v\r\e\w\6\u\v\1\y\k\i\6\3\8\i\d\7\1\s\s\6\m\m\a\m\k\v\7\w\0\k\z\e\j\e\y\0\k\2\2\t\x\q\u\p\g\2\1\8\l\i\4\5\3\g\1\w\a\u\w\n\4\8\v\c\u\4\v\e\7\q\b\e\7\3\q\v\y\s\v\x\e\v\k\k\9\w\3\m\d\o\4\7\m\u\1\a\a\h\r\c\r\d\w\b\u\3\p\5\h\d\l\l\k\1\u\9\3\j\y\i\g\i\m\o\3\3\i\o\e\3\t\a\b\s\f\5\f\k\p\m\4\z\j\o\6\s\h\j\2\b\j\y\p\y\u\m\a\k\u\v\l\j\o\k\i\y\m\s\2\b\f\h\5\2\n\d\7\t\e\a\s\7\u\4\2\g\5\f\f\b\y\u\8\4\g\i\r\b\6\y\e\v\k\1\t\7\u\d\f\j\a\q\r\g\b\s\2\q\b\4\f\s\p\r\u\j\q\h\e\9\i\l\u\v\n\u\4\e\4\c\o\k\d\q\q\8\p\n\c\k\c\n\y\0\q\r\s\8\9\j\j\v\e\z\a\x\y\8\d\k\6\v\2\p\s\l\4\7\y\4\m\3\e\o\0\h\e\z\p\p\l\2\s\j\b\e\b\d\x\a\u\s\u\b\s\w\y\r\n\f\0\1\6\y\r\0\n\6\n\5\8\e\w\t\7\w\g\e\w\s\i\w\j\u\h\n\5\6\7\0\n\v\n\v\m\h\s\y\2\b\x\3\m\f\j\c\a\o\f\p\2\p\h\9\6\5\d\f\i\h\5\3\1\t\r\e\u\8\u\e\u\n\k\r\t\t\k\0\0\4\7\e\c\s\m\d\2\e\r\y\9\5\g\t\f\v\c\y\y\4\o\a\n\z\h\x\1\g\o\l\2\z\r\9\h\7\z\e\r\u\u\z\s\8\8\v\e\7\m\0\n\1\5\d\r\q\d\f\q\n ]] 00:05:40.154 00:05:40.154 real 0m3.157s 00:05:40.154 user 0m1.587s 00:05:40.154 sys 0m1.351s 00:05:40.154 12:40:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.154 ************************************ 00:05:40.154 12:40:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:05:40.154 END TEST dd_flags_misc 00:05:40.154 ************************************ 00:05:40.413 12:40:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:05:40.413 12:40:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:05:40.413 * Second test run, disabling liburing, forcing AIO 00:05:40.413 12:40:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:05:40.413 12:40:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:05:40.413 12:40:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.413 12:40:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.413 12:40:48 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:05:40.413 ************************************ 00:05:40.413 START TEST dd_flag_append_forced_aio 00:05:40.413 ************************************ 00:05:40.413 12:40:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:05:40.413 12:40:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:05:40.413 12:40:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:05:40.413 12:40:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:05:40.413 12:40:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:40.413 12:40:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:40.413 12:40:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=8r04zqxnuw5ujtpxoy7wggfuldy2sdg5 00:05:40.413 12:40:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:05:40.413 12:40:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:40.413 12:40:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:40.413 12:40:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=ul6uezefqoets9y2p9qbof3sm8htd4vx 00:05:40.413 12:40:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 8r04zqxnuw5ujtpxoy7wggfuldy2sdg5 00:05:40.413 12:40:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s ul6uezefqoets9y2p9qbof3sm8htd4vx 00:05:40.413 12:40:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:05:40.413 [2024-11-15 12:40:48.907209] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:40.413 [2024-11-15 12:40:48.908002] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60229 ] 00:05:40.413 [2024-11-15 12:40:49.052829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.672 [2024-11-15 12:40:49.083834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.672 [2024-11-15 12:40:49.115066] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:40.672  [2024-11-15T12:40:49.342Z] Copying: 32/32 [B] (average 31 kBps) 00:05:40.672 00:05:40.672 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ ul6uezefqoets9y2p9qbof3sm8htd4vx8r04zqxnuw5ujtpxoy7wggfuldy2sdg5 == \u\l\6\u\e\z\e\f\q\o\e\t\s\9\y\2\p\9\q\b\o\f\3\s\m\8\h\t\d\4\v\x\8\r\0\4\z\q\x\n\u\w\5\u\j\t\p\x\o\y\7\w\g\g\f\u\l\d\y\2\s\d\g\5 ]] 00:05:40.672 00:05:40.672 real 0m0.424s 00:05:40.672 user 0m0.210s 00:05:40.672 sys 0m0.093s 00:05:40.672 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.672 ************************************ 00:05:40.672 END TEST dd_flag_append_forced_aio 00:05:40.672 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:40.672 ************************************ 00:05:40.672 12:40:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:05:40.672 12:40:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.672 12:40:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.672 12:40:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:40.672 ************************************ 00:05:40.672 START TEST dd_flag_directory_forced_aio 00:05:40.672 ************************************ 00:05:40.672 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:05:40.672 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:40.672 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:05:40.672 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:40.672 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:40.672 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.672 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:40.672 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.672 12:40:49 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:40.672 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.672 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:40.672 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:40.672 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:40.931 [2024-11-15 12:40:49.384583] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:05:40.931 [2024-11-15 12:40:49.384707] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60250 ] 00:05:40.931 [2024-11-15 12:40:49.529388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.931 [2024-11-15 12:40:49.565932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.931 [2024-11-15 12:40:49.599175] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:41.190 [2024-11-15 12:40:49.616946] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:41.190 [2024-11-15 12:40:49.616998] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:41.190 [2024-11-15 12:40:49.617028] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:41.190 [2024-11-15 12:40:49.675385] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:41.190 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:05:41.190 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:41.190 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:05:41.190 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:05:41.190 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:05:41.190 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:41.190 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:41.190 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:05:41.190 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:41.190 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.190 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.190 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.190 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.190 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.190 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.190 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.190 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:41.190 12:40:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:41.190 [2024-11-15 12:40:49.802095] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:05:41.190 [2024-11-15 12:40:49.802189] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60265 ] 00:05:41.449 [2024-11-15 12:40:49.944891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.449 [2024-11-15 12:40:49.974663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.449 [2024-11-15 12:40:50.005032] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:41.449 [2024-11-15 12:40:50.024816] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:41.449 [2024-11-15 12:40:50.024880] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:41.449 [2024-11-15 12:40:50.024898] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:41.449 [2024-11-15 12:40:50.083669] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:41.708 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:05:41.708 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:41.708 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:05:41.708 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:05:41.708 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:05:41.708 12:40:50 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:41.708 00:05:41.708 real 0m0.810s 00:05:41.708 user 0m0.400s 00:05:41.708 sys 0m0.199s 00:05:41.708 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.708 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:41.708 ************************************ 00:05:41.708 END TEST dd_flag_directory_forced_aio 00:05:41.708 ************************************ 00:05:41.708 12:40:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:05:41.708 12:40:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.708 12:40:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.708 12:40:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:41.708 ************************************ 00:05:41.708 START TEST dd_flag_nofollow_forced_aio 00:05:41.708 ************************************ 00:05:41.708 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:05:41.708 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:41.708 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:41.708 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:41.708 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:41.708 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:41.708 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:05:41.708 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:41.708 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.708 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.708 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.708 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.708 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.708 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.708 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.708 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:41.708 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:41.708 [2024-11-15 12:40:50.257187] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:05:41.708 [2024-11-15 12:40:50.257271] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60288 ] 00:05:41.967 [2024-11-15 12:40:50.401598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.967 [2024-11-15 12:40:50.432499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.967 [2024-11-15 12:40:50.464323] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:41.967 [2024-11-15 12:40:50.482697] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:41.967 [2024-11-15 12:40:50.482748] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:41.967 [2024-11-15 12:40:50.482763] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:41.967 [2024-11-15 12:40:50.542121] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:41.967 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:05:41.967 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:41.967 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:05:41.967 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:05:41.967 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:05:41.967 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:41.967 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:41.967 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:05:41.967 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:41.967 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.967 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.967 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.967 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.967 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.967 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.967 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.967 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:41.967 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:42.226 [2024-11-15 12:40:50.668838] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:05:42.226 [2024-11-15 12:40:50.668924] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60302 ] 00:05:42.226 [2024-11-15 12:40:50.813049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.226 [2024-11-15 12:40:50.841636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.226 [2024-11-15 12:40:50.870913] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:42.226 [2024-11-15 12:40:50.887829] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:42.226 [2024-11-15 12:40:50.887908] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:42.226 [2024-11-15 12:40:50.887924] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:42.485 [2024-11-15 12:40:50.946053] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:42.485 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:05:42.485 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:42.485 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:05:42.485 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:05:42.485 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:05:42.485 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:42.485 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:05:42.485 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:42.485 12:40:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:42.485 12:40:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:42.485 [2024-11-15 12:40:51.042957] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:05:42.485 [2024-11-15 12:40:51.043050] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60305 ] 00:05:42.744 [2024-11-15 12:40:51.177327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.744 [2024-11-15 12:40:51.204480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.744 [2024-11-15 12:40:51.231395] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:42.744  [2024-11-15T12:40:51.414Z] Copying: 512/512 [B] (average 500 kBps) 00:05:42.744 00:05:42.744 12:40:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ t0dc91qwqb21mmn7ey68e59ogykan77n3vy2g9xrf73f60vgxvz2l2ynsspjuyvnbnkfj5jsn74fm9cusbnpl582lnluzpkfialqb383r6c81xfjobg7r6ffromw4rfahnci6m50vqr4akzn4uszhrwkhmslzk7n9jkdjj4r7d3vghdub2797mfcc79k6ult19st8l2ez8yub4pkfyjdzbfhnr68k0oz26kubepon2uk58w5q5ry1wdrgfntqf0injozq72gt2ikkxwn78eeckm4yoxe4vrhljx8inef1634h919n6tcg48kyf2c3ixqv95z2ag6kjutrv133gqty59pt9lh4yzj40dbtak91jqhlgn01fah1h90ocstyalguhpdox3usoo0j9nfa2ebpbktbwex5at4sgwz90aingoo5c0u6wr8ndrok1zx4vfwjbgcapod6dewt17eff1dy0z5d6st97trkwaxzb0ubsseh3r9btwsbt9nyvj5he71 == \t\0\d\c\9\1\q\w\q\b\2\1\m\m\n\7\e\y\6\8\e\5\9\o\g\y\k\a\n\7\7\n\3\v\y\2\g\9\x\r\f\7\3\f\6\0\v\g\x\v\z\2\l\2\y\n\s\s\p\j\u\y\v\n\b\n\k\f\j\5\j\s\n\7\4\f\m\9\c\u\s\b\n\p\l\5\8\2\l\n\l\u\z\p\k\f\i\a\l\q\b\3\8\3\r\6\c\8\1\x\f\j\o\b\g\7\r\6\f\f\r\o\m\w\4\r\f\a\h\n\c\i\6\m\5\0\v\q\r\4\a\k\z\n\4\u\s\z\h\r\w\k\h\m\s\l\z\k\7\n\9\j\k\d\j\j\4\r\7\d\3\v\g\h\d\u\b\2\7\9\7\m\f\c\c\7\9\k\6\u\l\t\1\9\s\t\8\l\2\e\z\8\y\u\b\4\p\k\f\y\j\d\z\b\f\h\n\r\6\8\k\0\o\z\2\6\k\u\b\e\p\o\n\2\u\k\5\8\w\5\q\5\r\y\1\w\d\r\g\f\n\t\q\f\0\i\n\j\o\z\q\7\2\g\t\2\i\k\k\x\w\n\7\8\e\e\c\k\m\4\y\o\x\e\4\v\r\h\l\j\x\8\i\n\e\f\1\6\3\4\h\9\1\9\n\6\t\c\g\4\8\k\y\f\2\c\3\i\x\q\v\9\5\z\2\a\g\6\k\j\u\t\r\v\1\3\3\g\q\t\y\5\9\p\t\9\l\h\4\y\z\j\4\0\d\b\t\a\k\9\1\j\q\h\l\g\n\0\1\f\a\h\1\h\9\0\o\c\s\t\y\a\l\g\u\h\p\d\o\x\3\u\s\o\o\0\j\9\n\f\a\2\e\b\p\b\k\t\b\w\e\x\5\a\t\4\s\g\w\z\9\0\a\i\n\g\o\o\5\c\0\u\6\w\r\8\n\d\r\o\k\1\z\x\4\v\f\w\j\b\g\c\a\p\o\d\6\d\e\w\t\1\7\e\f\f\1\d\y\0\z\5\d\6\s\t\9\7\t\r\k\w\a\x\z\b\0\u\b\s\s\e\h\3\r\9\b\t\w\s\b\t\9\n\y\v\j\5\h\e\7\1 ]] 00:05:42.744 00:05:42.744 real 0m1.197s 00:05:42.744 user 0m0.595s 00:05:42.744 sys 0m0.277s 00:05:42.744 ************************************ 00:05:42.744 END TEST dd_flag_nofollow_forced_aio 00:05:42.744 ************************************ 00:05:42.744 12:40:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.745 12:40:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:43.004 12:40:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:05:43.004 12:40:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.004 12:40:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.004 12:40:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:43.004 ************************************ 00:05:43.004 START TEST dd_flag_noatime_forced_aio 00:05:43.004 ************************************ 00:05:43.004 12:40:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:05:43.004 12:40:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:05:43.004 12:40:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:05:43.004 12:40:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:05:43.004 12:40:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:43.004 12:40:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:43.004 12:40:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:43.004 12:40:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1731674451 00:05:43.004 12:40:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:43.004 12:40:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1731674451 00:05:43.004 12:40:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:05:43.942 12:40:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:43.942 [2024-11-15 12:40:52.520957] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:43.942 [2024-11-15 12:40:52.521075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60340 ] 00:05:44.201 [2024-11-15 12:40:52.667595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.201 [2024-11-15 12:40:52.703381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.201 [2024-11-15 12:40:52.738578] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:44.201  [2024-11-15T12:40:53.131Z] Copying: 512/512 [B] (average 500 kBps) 00:05:44.461 00:05:44.461 12:40:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:44.461 12:40:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1731674451 )) 00:05:44.461 12:40:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:44.461 12:40:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1731674451 )) 00:05:44.461 12:40:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:44.461 [2024-11-15 12:40:52.955634] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:05:44.461 [2024-11-15 12:40:52.955732] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60357 ] 00:05:44.461 [2024-11-15 12:40:53.101006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.720 [2024-11-15 12:40:53.131702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.720 [2024-11-15 12:40:53.162914] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:44.720  [2024-11-15T12:40:53.390Z] Copying: 512/512 [B] (average 500 kBps) 00:05:44.720 00:05:44.720 12:40:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:44.720 12:40:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1731674453 )) 00:05:44.720 00:05:44.720 real 0m1.887s 00:05:44.720 user 0m0.439s 00:05:44.720 sys 0m0.211s 00:05:44.720 12:40:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.720 12:40:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:44.720 ************************************ 00:05:44.720 END TEST dd_flag_noatime_forced_aio 00:05:44.720 ************************************ 00:05:44.720 12:40:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:05:44.720 12:40:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.720 12:40:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.720 12:40:53 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:05:44.720 ************************************ 00:05:44.720 START TEST dd_flags_misc_forced_aio 00:05:44.720 ************************************ 00:05:44.720 12:40:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:05:44.720 12:40:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:05:44.720 12:40:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:05:44.720 12:40:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:05:44.720 12:40:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:05:44.720 12:40:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:05:44.720 12:40:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:44.720 12:40:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:44.720 12:40:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:44.720 12:40:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:05:45.016 [2024-11-15 12:40:53.428091] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:05:45.016 [2024-11-15 12:40:53.428177] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60378 ] 00:05:45.016 [2024-11-15 12:40:53.565053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.016 [2024-11-15 12:40:53.594463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.016 [2024-11-15 12:40:53.623048] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.016  [2024-11-15T12:40:53.944Z] Copying: 512/512 [B] (average 500 kBps) 00:05:45.274 00:05:45.274 12:40:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 5puhoospweq88gmy7g7sc0o566ncf5x1gvdqbrzx495hlbexcnr17fjj4576o01a7mlqmbfcp1n4shchyd9wnyxn8mre5y70jqzep0jctis4catup0gyvjxyrrm19kiej0brx5a6v1p2l7dipp94rzk929cxkziivfxjijpzyldp8ujt3yrjs9u33r2sn2hc44smkcdql5ykschsz1dnqvgavakups7sz9r6uivs6pi7bhm65rryomapzsn0x5nla8vux3gv05ci8ckz7z75hyrglh8qdzc58z086ppuzey8oz5oqat4c4dxlpp4wtz4qg9619ge74ibhds7906k5h82jj8cpoyc87jgepmjolkibgtdymz5vugq10i8clahd3ayie4sg2lbx18s1y36i75ci1twp4r4adrqfe7c4sfh682z8280lrz3ky9sjz4mfyiiaanho4yt80r53c6z6mm54leisotrbxgyekbncccr77m6z4eezvp60932krwj == 
\5\p\u\h\o\o\s\p\w\e\q\8\8\g\m\y\7\g\7\s\c\0\o\5\6\6\n\c\f\5\x\1\g\v\d\q\b\r\z\x\4\9\5\h\l\b\e\x\c\n\r\1\7\f\j\j\4\5\7\6\o\0\1\a\7\m\l\q\m\b\f\c\p\1\n\4\s\h\c\h\y\d\9\w\n\y\x\n\8\m\r\e\5\y\7\0\j\q\z\e\p\0\j\c\t\i\s\4\c\a\t\u\p\0\g\y\v\j\x\y\r\r\m\1\9\k\i\e\j\0\b\r\x\5\a\6\v\1\p\2\l\7\d\i\p\p\9\4\r\z\k\9\2\9\c\x\k\z\i\i\v\f\x\j\i\j\p\z\y\l\d\p\8\u\j\t\3\y\r\j\s\9\u\3\3\r\2\s\n\2\h\c\4\4\s\m\k\c\d\q\l\5\y\k\s\c\h\s\z\1\d\n\q\v\g\a\v\a\k\u\p\s\7\s\z\9\r\6\u\i\v\s\6\p\i\7\b\h\m\6\5\r\r\y\o\m\a\p\z\s\n\0\x\5\n\l\a\8\v\u\x\3\g\v\0\5\c\i\8\c\k\z\7\z\7\5\h\y\r\g\l\h\8\q\d\z\c\5\8\z\0\8\6\p\p\u\z\e\y\8\o\z\5\o\q\a\t\4\c\4\d\x\l\p\p\4\w\t\z\4\q\g\9\6\1\9\g\e\7\4\i\b\h\d\s\7\9\0\6\k\5\h\8\2\j\j\8\c\p\o\y\c\8\7\j\g\e\p\m\j\o\l\k\i\b\g\t\d\y\m\z\5\v\u\g\q\1\0\i\8\c\l\a\h\d\3\a\y\i\e\4\s\g\2\l\b\x\1\8\s\1\y\3\6\i\7\5\c\i\1\t\w\p\4\r\4\a\d\r\q\f\e\7\c\4\s\f\h\6\8\2\z\8\2\8\0\l\r\z\3\k\y\9\s\j\z\4\m\f\y\i\i\a\a\n\h\o\4\y\t\8\0\r\5\3\c\6\z\6\m\m\5\4\l\e\i\s\o\t\r\b\x\g\y\e\k\b\n\c\c\c\r\7\7\m\6\z\4\e\e\z\v\p\6\0\9\3\2\k\r\w\j ]] 00:05:45.274 12:40:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:45.274 12:40:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:05:45.274 [2024-11-15 12:40:53.826647] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:05:45.274 [2024-11-15 12:40:53.826727] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60391 ] 00:05:45.533 [2024-11-15 12:40:53.959832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.533 [2024-11-15 12:40:53.987463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.533 [2024-11-15 12:40:54.013788] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.533  [2024-11-15T12:40:54.203Z] Copying: 512/512 [B] (average 500 kBps) 00:05:45.533 00:05:45.533 12:40:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 5puhoospweq88gmy7g7sc0o566ncf5x1gvdqbrzx495hlbexcnr17fjj4576o01a7mlqmbfcp1n4shchyd9wnyxn8mre5y70jqzep0jctis4catup0gyvjxyrrm19kiej0brx5a6v1p2l7dipp94rzk929cxkziivfxjijpzyldp8ujt3yrjs9u33r2sn2hc44smkcdql5ykschsz1dnqvgavakups7sz9r6uivs6pi7bhm65rryomapzsn0x5nla8vux3gv05ci8ckz7z75hyrglh8qdzc58z086ppuzey8oz5oqat4c4dxlpp4wtz4qg9619ge74ibhds7906k5h82jj8cpoyc87jgepmjolkibgtdymz5vugq10i8clahd3ayie4sg2lbx18s1y36i75ci1twp4r4adrqfe7c4sfh682z8280lrz3ky9sjz4mfyiiaanho4yt80r53c6z6mm54leisotrbxgyekbncccr77m6z4eezvp60932krwj == 
\5\p\u\h\o\o\s\p\w\e\q\8\8\g\m\y\7\g\7\s\c\0\o\5\6\6\n\c\f\5\x\1\g\v\d\q\b\r\z\x\4\9\5\h\l\b\e\x\c\n\r\1\7\f\j\j\4\5\7\6\o\0\1\a\7\m\l\q\m\b\f\c\p\1\n\4\s\h\c\h\y\d\9\w\n\y\x\n\8\m\r\e\5\y\7\0\j\q\z\e\p\0\j\c\t\i\s\4\c\a\t\u\p\0\g\y\v\j\x\y\r\r\m\1\9\k\i\e\j\0\b\r\x\5\a\6\v\1\p\2\l\7\d\i\p\p\9\4\r\z\k\9\2\9\c\x\k\z\i\i\v\f\x\j\i\j\p\z\y\l\d\p\8\u\j\t\3\y\r\j\s\9\u\3\3\r\2\s\n\2\h\c\4\4\s\m\k\c\d\q\l\5\y\k\s\c\h\s\z\1\d\n\q\v\g\a\v\a\k\u\p\s\7\s\z\9\r\6\u\i\v\s\6\p\i\7\b\h\m\6\5\r\r\y\o\m\a\p\z\s\n\0\x\5\n\l\a\8\v\u\x\3\g\v\0\5\c\i\8\c\k\z\7\z\7\5\h\y\r\g\l\h\8\q\d\z\c\5\8\z\0\8\6\p\p\u\z\e\y\8\o\z\5\o\q\a\t\4\c\4\d\x\l\p\p\4\w\t\z\4\q\g\9\6\1\9\g\e\7\4\i\b\h\d\s\7\9\0\6\k\5\h\8\2\j\j\8\c\p\o\y\c\8\7\j\g\e\p\m\j\o\l\k\i\b\g\t\d\y\m\z\5\v\u\g\q\1\0\i\8\c\l\a\h\d\3\a\y\i\e\4\s\g\2\l\b\x\1\8\s\1\y\3\6\i\7\5\c\i\1\t\w\p\4\r\4\a\d\r\q\f\e\7\c\4\s\f\h\6\8\2\z\8\2\8\0\l\r\z\3\k\y\9\s\j\z\4\m\f\y\i\i\a\a\n\h\o\4\y\t\8\0\r\5\3\c\6\z\6\m\m\5\4\l\e\i\s\o\t\r\b\x\g\y\e\k\b\n\c\c\c\r\7\7\m\6\z\4\e\e\z\v\p\6\0\9\3\2\k\r\w\j ]] 00:05:45.533 12:40:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:45.533 12:40:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:05:45.533 [2024-11-15 12:40:54.200460] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:05:45.533 [2024-11-15 12:40:54.200541] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60393 ] 00:05:45.791 [2024-11-15 12:40:54.338426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.791 [2024-11-15 12:40:54.366482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.791 [2024-11-15 12:40:54.393200] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.791  [2024-11-15T12:40:54.737Z] Copying: 512/512 [B] (average 166 kBps) 00:05:46.067 00:05:46.067 12:40:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 5puhoospweq88gmy7g7sc0o566ncf5x1gvdqbrzx495hlbexcnr17fjj4576o01a7mlqmbfcp1n4shchyd9wnyxn8mre5y70jqzep0jctis4catup0gyvjxyrrm19kiej0brx5a6v1p2l7dipp94rzk929cxkziivfxjijpzyldp8ujt3yrjs9u33r2sn2hc44smkcdql5ykschsz1dnqvgavakups7sz9r6uivs6pi7bhm65rryomapzsn0x5nla8vux3gv05ci8ckz7z75hyrglh8qdzc58z086ppuzey8oz5oqat4c4dxlpp4wtz4qg9619ge74ibhds7906k5h82jj8cpoyc87jgepmjolkibgtdymz5vugq10i8clahd3ayie4sg2lbx18s1y36i75ci1twp4r4adrqfe7c4sfh682z8280lrz3ky9sjz4mfyiiaanho4yt80r53c6z6mm54leisotrbxgyekbncccr77m6z4eezvp60932krwj == 
\5\p\u\h\o\o\s\p\w\e\q\8\8\g\m\y\7\g\7\s\c\0\o\5\6\6\n\c\f\5\x\1\g\v\d\q\b\r\z\x\4\9\5\h\l\b\e\x\c\n\r\1\7\f\j\j\4\5\7\6\o\0\1\a\7\m\l\q\m\b\f\c\p\1\n\4\s\h\c\h\y\d\9\w\n\y\x\n\8\m\r\e\5\y\7\0\j\q\z\e\p\0\j\c\t\i\s\4\c\a\t\u\p\0\g\y\v\j\x\y\r\r\m\1\9\k\i\e\j\0\b\r\x\5\a\6\v\1\p\2\l\7\d\i\p\p\9\4\r\z\k\9\2\9\c\x\k\z\i\i\v\f\x\j\i\j\p\z\y\l\d\p\8\u\j\t\3\y\r\j\s\9\u\3\3\r\2\s\n\2\h\c\4\4\s\m\k\c\d\q\l\5\y\k\s\c\h\s\z\1\d\n\q\v\g\a\v\a\k\u\p\s\7\s\z\9\r\6\u\i\v\s\6\p\i\7\b\h\m\6\5\r\r\y\o\m\a\p\z\s\n\0\x\5\n\l\a\8\v\u\x\3\g\v\0\5\c\i\8\c\k\z\7\z\7\5\h\y\r\g\l\h\8\q\d\z\c\5\8\z\0\8\6\p\p\u\z\e\y\8\o\z\5\o\q\a\t\4\c\4\d\x\l\p\p\4\w\t\z\4\q\g\9\6\1\9\g\e\7\4\i\b\h\d\s\7\9\0\6\k\5\h\8\2\j\j\8\c\p\o\y\c\8\7\j\g\e\p\m\j\o\l\k\i\b\g\t\d\y\m\z\5\v\u\g\q\1\0\i\8\c\l\a\h\d\3\a\y\i\e\4\s\g\2\l\b\x\1\8\s\1\y\3\6\i\7\5\c\i\1\t\w\p\4\r\4\a\d\r\q\f\e\7\c\4\s\f\h\6\8\2\z\8\2\8\0\l\r\z\3\k\y\9\s\j\z\4\m\f\y\i\i\a\a\n\h\o\4\y\t\8\0\r\5\3\c\6\z\6\m\m\5\4\l\e\i\s\o\t\r\b\x\g\y\e\k\b\n\c\c\c\r\7\7\m\6\z\4\e\e\z\v\p\6\0\9\3\2\k\r\w\j ]] 00:05:46.067 12:40:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:46.067 12:40:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:05:46.067 [2024-11-15 12:40:54.608957] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:05:46.067 [2024-11-15 12:40:54.609050] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60395 ] 00:05:46.338 [2024-11-15 12:40:54.754050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.338 [2024-11-15 12:40:54.788372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.338 [2024-11-15 12:40:54.823990] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:46.338  [2024-11-15T12:40:55.008Z] Copying: 512/512 [B] (average 500 kBps) 00:05:46.338 00:05:46.338 12:40:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 5puhoospweq88gmy7g7sc0o566ncf5x1gvdqbrzx495hlbexcnr17fjj4576o01a7mlqmbfcp1n4shchyd9wnyxn8mre5y70jqzep0jctis4catup0gyvjxyrrm19kiej0brx5a6v1p2l7dipp94rzk929cxkziivfxjijpzyldp8ujt3yrjs9u33r2sn2hc44smkcdql5ykschsz1dnqvgavakups7sz9r6uivs6pi7bhm65rryomapzsn0x5nla8vux3gv05ci8ckz7z75hyrglh8qdzc58z086ppuzey8oz5oqat4c4dxlpp4wtz4qg9619ge74ibhds7906k5h82jj8cpoyc87jgepmjolkibgtdymz5vugq10i8clahd3ayie4sg2lbx18s1y36i75ci1twp4r4adrqfe7c4sfh682z8280lrz3ky9sjz4mfyiiaanho4yt80r53c6z6mm54leisotrbxgyekbncccr77m6z4eezvp60932krwj == 
\5\p\u\h\o\o\s\p\w\e\q\8\8\g\m\y\7\g\7\s\c\0\o\5\6\6\n\c\f\5\x\1\g\v\d\q\b\r\z\x\4\9\5\h\l\b\e\x\c\n\r\1\7\f\j\j\4\5\7\6\o\0\1\a\7\m\l\q\m\b\f\c\p\1\n\4\s\h\c\h\y\d\9\w\n\y\x\n\8\m\r\e\5\y\7\0\j\q\z\e\p\0\j\c\t\i\s\4\c\a\t\u\p\0\g\y\v\j\x\y\r\r\m\1\9\k\i\e\j\0\b\r\x\5\a\6\v\1\p\2\l\7\d\i\p\p\9\4\r\z\k\9\2\9\c\x\k\z\i\i\v\f\x\j\i\j\p\z\y\l\d\p\8\u\j\t\3\y\r\j\s\9\u\3\3\r\2\s\n\2\h\c\4\4\s\m\k\c\d\q\l\5\y\k\s\c\h\s\z\1\d\n\q\v\g\a\v\a\k\u\p\s\7\s\z\9\r\6\u\i\v\s\6\p\i\7\b\h\m\6\5\r\r\y\o\m\a\p\z\s\n\0\x\5\n\l\a\8\v\u\x\3\g\v\0\5\c\i\8\c\k\z\7\z\7\5\h\y\r\g\l\h\8\q\d\z\c\5\8\z\0\8\6\p\p\u\z\e\y\8\o\z\5\o\q\a\t\4\c\4\d\x\l\p\p\4\w\t\z\4\q\g\9\6\1\9\g\e\7\4\i\b\h\d\s\7\9\0\6\k\5\h\8\2\j\j\8\c\p\o\y\c\8\7\j\g\e\p\m\j\o\l\k\i\b\g\t\d\y\m\z\5\v\u\g\q\1\0\i\8\c\l\a\h\d\3\a\y\i\e\4\s\g\2\l\b\x\1\8\s\1\y\3\6\i\7\5\c\i\1\t\w\p\4\r\4\a\d\r\q\f\e\7\c\4\s\f\h\6\8\2\z\8\2\8\0\l\r\z\3\k\y\9\s\j\z\4\m\f\y\i\i\a\a\n\h\o\4\y\t\8\0\r\5\3\c\6\z\6\m\m\5\4\l\e\i\s\o\t\r\b\x\g\y\e\k\b\n\c\c\c\r\7\7\m\6\z\4\e\e\z\v\p\6\0\9\3\2\k\r\w\j ]] 00:05:46.338 12:40:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:05:46.338 12:40:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:05:46.338 12:40:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:46.338 12:40:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:46.338 12:40:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:46.338 12:40:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:05:46.597 [2024-11-15 12:40:55.034167] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
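
(Aside, not part of the captured trace: a rough sketch of the loop these dd_flags_misc_forced_aio lines are stepping through, with paths shortened; gen_bytes is the test helper whose redirection into dd.dump0 is hidden behind xtrace_disable above, and the byte-for-byte comparison below is a simplification of the escaped pattern checks seen in the log.)

flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)            # direct nonblock sync dsync
for flag_ro in "${flags_ro[@]}"; do
  gen_bytes 512                                   # fresh 512-byte payload in dd.dump0
  for flag_rw in "${flags_rw[@]}"; do
    spdk_dd --aio --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
    [[ $(< dd.dump0) == "$(< dd.dump1)" ]]        # copied data must match the input exactly
  done
done
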
00:05:46.597 [2024-11-15 12:40:55.034261] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60408 ] 00:05:46.597 [2024-11-15 12:40:55.177877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.597 [2024-11-15 12:40:55.209702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.597 [2024-11-15 12:40:55.242988] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:46.597  [2024-11-15T12:40:55.525Z] Copying: 512/512 [B] (average 500 kBps) 00:05:46.855 00:05:46.855 12:40:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ l9amth9129zrqgercnuzlftqwefb4vfggvxfbb0na7it5gmkoew0zp9b1lp67hcnp1phtlvmg4qcxlkueqws3a6df5fgrvpeotv6cqjt6d9oqwtencuxa22mo135na5cdiay8rqw2h77w8svfu31yj5braj2kvw6egyj1c83etipsrn22v4sxkijfebhy3im3e6wb6o6dxjhy279u8xs9zhcjj3yfig3y9a4842b4ltiukdz2s3xmeqsmju3388zvagwn82e2h7to0eatke2o5bg15udsugm7zjb0kp1bs2ln6agtqi10u4k6fu9rld8l3t1n80inlk4nmc6vpr2ywgg5s8eej6mgfb0y4y5iga3b2dj8zwzzydh3upnpmgq880kymnr3por00oouwkjm5k4am55e3bd96l1sjzc64dcfettixd9btfl2lb857r0f7rsh1b844qik94ndmh8zax4vri221c2ga5aytn342emrxrgkis0vj9kattuod7q == \l\9\a\m\t\h\9\1\2\9\z\r\q\g\e\r\c\n\u\z\l\f\t\q\w\e\f\b\4\v\f\g\g\v\x\f\b\b\0\n\a\7\i\t\5\g\m\k\o\e\w\0\z\p\9\b\1\l\p\6\7\h\c\n\p\1\p\h\t\l\v\m\g\4\q\c\x\l\k\u\e\q\w\s\3\a\6\d\f\5\f\g\r\v\p\e\o\t\v\6\c\q\j\t\6\d\9\o\q\w\t\e\n\c\u\x\a\2\2\m\o\1\3\5\n\a\5\c\d\i\a\y\8\r\q\w\2\h\7\7\w\8\s\v\f\u\3\1\y\j\5\b\r\a\j\2\k\v\w\6\e\g\y\j\1\c\8\3\e\t\i\p\s\r\n\2\2\v\4\s\x\k\i\j\f\e\b\h\y\3\i\m\3\e\6\w\b\6\o\6\d\x\j\h\y\2\7\9\u\8\x\s\9\z\h\c\j\j\3\y\f\i\g\3\y\9\a\4\8\4\2\b\4\l\t\i\u\k\d\z\2\s\3\x\m\e\q\s\m\j\u\3\3\8\8\z\v\a\g\w\n\8\2\e\2\h\7\t\o\0\e\a\t\k\e\2\o\5\b\g\1\5\u\d\s\u\g\m\7\z\j\b\0\k\p\1\b\s\2\l\n\6\a\g\t\q\i\1\0\u\4\k\6\f\u\9\r\l\d\8\l\3\t\1\n\8\0\i\n\l\k\4\n\m\c\6\v\p\r\2\y\w\g\g\5\s\8\e\e\j\6\m\g\f\b\0\y\4\y\5\i\g\a\3\b\2\d\j\8\z\w\z\z\y\d\h\3\u\p\n\p\m\g\q\8\8\0\k\y\m\n\r\3\p\o\r\0\0\o\o\u\w\k\j\m\5\k\4\a\m\5\5\e\3\b\d\9\6\l\1\s\j\z\c\6\4\d\c\f\e\t\t\i\x\d\9\b\t\f\l\2\l\b\8\5\7\r\0\f\7\r\s\h\1\b\8\4\4\q\i\k\9\4\n\d\m\h\8\z\a\x\4\v\r\i\2\2\1\c\2\g\a\5\a\y\t\n\3\4\2\e\m\r\x\r\g\k\i\s\0\v\j\9\k\a\t\t\u\o\d\7\q ]] 00:05:46.855 12:40:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:46.855 12:40:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:05:46.855 [2024-11-15 12:40:55.452200] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:46.855 [2024-11-15 12:40:55.452295] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60412 ] 00:05:47.113 [2024-11-15 12:40:55.597244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.113 [2024-11-15 12:40:55.626475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.113 [2024-11-15 12:40:55.656253] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:47.113  [2024-11-15T12:40:56.043Z] Copying: 512/512 [B] (average 500 kBps) 00:05:47.373 00:05:47.373 12:40:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ l9amth9129zrqgercnuzlftqwefb4vfggvxfbb0na7it5gmkoew0zp9b1lp67hcnp1phtlvmg4qcxlkueqws3a6df5fgrvpeotv6cqjt6d9oqwtencuxa22mo135na5cdiay8rqw2h77w8svfu31yj5braj2kvw6egyj1c83etipsrn22v4sxkijfebhy3im3e6wb6o6dxjhy279u8xs9zhcjj3yfig3y9a4842b4ltiukdz2s3xmeqsmju3388zvagwn82e2h7to0eatke2o5bg15udsugm7zjb0kp1bs2ln6agtqi10u4k6fu9rld8l3t1n80inlk4nmc6vpr2ywgg5s8eej6mgfb0y4y5iga3b2dj8zwzzydh3upnpmgq880kymnr3por00oouwkjm5k4am55e3bd96l1sjzc64dcfettixd9btfl2lb857r0f7rsh1b844qik94ndmh8zax4vri221c2ga5aytn342emrxrgkis0vj9kattuod7q == \l\9\a\m\t\h\9\1\2\9\z\r\q\g\e\r\c\n\u\z\l\f\t\q\w\e\f\b\4\v\f\g\g\v\x\f\b\b\0\n\a\7\i\t\5\g\m\k\o\e\w\0\z\p\9\b\1\l\p\6\7\h\c\n\p\1\p\h\t\l\v\m\g\4\q\c\x\l\k\u\e\q\w\s\3\a\6\d\f\5\f\g\r\v\p\e\o\t\v\6\c\q\j\t\6\d\9\o\q\w\t\e\n\c\u\x\a\2\2\m\o\1\3\5\n\a\5\c\d\i\a\y\8\r\q\w\2\h\7\7\w\8\s\v\f\u\3\1\y\j\5\b\r\a\j\2\k\v\w\6\e\g\y\j\1\c\8\3\e\t\i\p\s\r\n\2\2\v\4\s\x\k\i\j\f\e\b\h\y\3\i\m\3\e\6\w\b\6\o\6\d\x\j\h\y\2\7\9\u\8\x\s\9\z\h\c\j\j\3\y\f\i\g\3\y\9\a\4\8\4\2\b\4\l\t\i\u\k\d\z\2\s\3\x\m\e\q\s\m\j\u\3\3\8\8\z\v\a\g\w\n\8\2\e\2\h\7\t\o\0\e\a\t\k\e\2\o\5\b\g\1\5\u\d\s\u\g\m\7\z\j\b\0\k\p\1\b\s\2\l\n\6\a\g\t\q\i\1\0\u\4\k\6\f\u\9\r\l\d\8\l\3\t\1\n\8\0\i\n\l\k\4\n\m\c\6\v\p\r\2\y\w\g\g\5\s\8\e\e\j\6\m\g\f\b\0\y\4\y\5\i\g\a\3\b\2\d\j\8\z\w\z\z\y\d\h\3\u\p\n\p\m\g\q\8\8\0\k\y\m\n\r\3\p\o\r\0\0\o\o\u\w\k\j\m\5\k\4\a\m\5\5\e\3\b\d\9\6\l\1\s\j\z\c\6\4\d\c\f\e\t\t\i\x\d\9\b\t\f\l\2\l\b\8\5\7\r\0\f\7\r\s\h\1\b\8\4\4\q\i\k\9\4\n\d\m\h\8\z\a\x\4\v\r\i\2\2\1\c\2\g\a\5\a\y\t\n\3\4\2\e\m\r\x\r\g\k\i\s\0\v\j\9\k\a\t\t\u\o\d\7\q ]] 00:05:47.373 12:40:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:47.373 12:40:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:05:47.373 [2024-11-15 12:40:55.855363] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:47.373 [2024-11-15 12:40:55.855437] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60420 ] 00:05:47.373 [2024-11-15 12:40:55.994051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.373 [2024-11-15 12:40:56.021343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.632 [2024-11-15 12:40:56.048332] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:47.632  [2024-11-15T12:40:56.302Z] Copying: 512/512 [B] (average 500 kBps) 00:05:47.632 00:05:47.632 12:40:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ l9amth9129zrqgercnuzlftqwefb4vfggvxfbb0na7it5gmkoew0zp9b1lp67hcnp1phtlvmg4qcxlkueqws3a6df5fgrvpeotv6cqjt6d9oqwtencuxa22mo135na5cdiay8rqw2h77w8svfu31yj5braj2kvw6egyj1c83etipsrn22v4sxkijfebhy3im3e6wb6o6dxjhy279u8xs9zhcjj3yfig3y9a4842b4ltiukdz2s3xmeqsmju3388zvagwn82e2h7to0eatke2o5bg15udsugm7zjb0kp1bs2ln6agtqi10u4k6fu9rld8l3t1n80inlk4nmc6vpr2ywgg5s8eej6mgfb0y4y5iga3b2dj8zwzzydh3upnpmgq880kymnr3por00oouwkjm5k4am55e3bd96l1sjzc64dcfettixd9btfl2lb857r0f7rsh1b844qik94ndmh8zax4vri221c2ga5aytn342emrxrgkis0vj9kattuod7q == \l\9\a\m\t\h\9\1\2\9\z\r\q\g\e\r\c\n\u\z\l\f\t\q\w\e\f\b\4\v\f\g\g\v\x\f\b\b\0\n\a\7\i\t\5\g\m\k\o\e\w\0\z\p\9\b\1\l\p\6\7\h\c\n\p\1\p\h\t\l\v\m\g\4\q\c\x\l\k\u\e\q\w\s\3\a\6\d\f\5\f\g\r\v\p\e\o\t\v\6\c\q\j\t\6\d\9\o\q\w\t\e\n\c\u\x\a\2\2\m\o\1\3\5\n\a\5\c\d\i\a\y\8\r\q\w\2\h\7\7\w\8\s\v\f\u\3\1\y\j\5\b\r\a\j\2\k\v\w\6\e\g\y\j\1\c\8\3\e\t\i\p\s\r\n\2\2\v\4\s\x\k\i\j\f\e\b\h\y\3\i\m\3\e\6\w\b\6\o\6\d\x\j\h\y\2\7\9\u\8\x\s\9\z\h\c\j\j\3\y\f\i\g\3\y\9\a\4\8\4\2\b\4\l\t\i\u\k\d\z\2\s\3\x\m\e\q\s\m\j\u\3\3\8\8\z\v\a\g\w\n\8\2\e\2\h\7\t\o\0\e\a\t\k\e\2\o\5\b\g\1\5\u\d\s\u\g\m\7\z\j\b\0\k\p\1\b\s\2\l\n\6\a\g\t\q\i\1\0\u\4\k\6\f\u\9\r\l\d\8\l\3\t\1\n\8\0\i\n\l\k\4\n\m\c\6\v\p\r\2\y\w\g\g\5\s\8\e\e\j\6\m\g\f\b\0\y\4\y\5\i\g\a\3\b\2\d\j\8\z\w\z\z\y\d\h\3\u\p\n\p\m\g\q\8\8\0\k\y\m\n\r\3\p\o\r\0\0\o\o\u\w\k\j\m\5\k\4\a\m\5\5\e\3\b\d\9\6\l\1\s\j\z\c\6\4\d\c\f\e\t\t\i\x\d\9\b\t\f\l\2\l\b\8\5\7\r\0\f\7\r\s\h\1\b\8\4\4\q\i\k\9\4\n\d\m\h\8\z\a\x\4\v\r\i\2\2\1\c\2\g\a\5\a\y\t\n\3\4\2\e\m\r\x\r\g\k\i\s\0\v\j\9\k\a\t\t\u\o\d\7\q ]] 00:05:47.632 12:40:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:47.632 12:40:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:05:47.632 [2024-11-15 12:40:56.233225] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:47.632 [2024-11-15 12:40:56.233309] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60427 ] 00:05:47.890 [2024-11-15 12:40:56.369699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.890 [2024-11-15 12:40:56.397508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.890 [2024-11-15 12:40:56.424110] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:47.890  [2024-11-15T12:40:56.561Z] Copying: 512/512 [B] (average 500 kBps) 00:05:47.891 00:05:48.150 12:40:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ l9amth9129zrqgercnuzlftqwefb4vfggvxfbb0na7it5gmkoew0zp9b1lp67hcnp1phtlvmg4qcxlkueqws3a6df5fgrvpeotv6cqjt6d9oqwtencuxa22mo135na5cdiay8rqw2h77w8svfu31yj5braj2kvw6egyj1c83etipsrn22v4sxkijfebhy3im3e6wb6o6dxjhy279u8xs9zhcjj3yfig3y9a4842b4ltiukdz2s3xmeqsmju3388zvagwn82e2h7to0eatke2o5bg15udsugm7zjb0kp1bs2ln6agtqi10u4k6fu9rld8l3t1n80inlk4nmc6vpr2ywgg5s8eej6mgfb0y4y5iga3b2dj8zwzzydh3upnpmgq880kymnr3por00oouwkjm5k4am55e3bd96l1sjzc64dcfettixd9btfl2lb857r0f7rsh1b844qik94ndmh8zax4vri221c2ga5aytn342emrxrgkis0vj9kattuod7q == \l\9\a\m\t\h\9\1\2\9\z\r\q\g\e\r\c\n\u\z\l\f\t\q\w\e\f\b\4\v\f\g\g\v\x\f\b\b\0\n\a\7\i\t\5\g\m\k\o\e\w\0\z\p\9\b\1\l\p\6\7\h\c\n\p\1\p\h\t\l\v\m\g\4\q\c\x\l\k\u\e\q\w\s\3\a\6\d\f\5\f\g\r\v\p\e\o\t\v\6\c\q\j\t\6\d\9\o\q\w\t\e\n\c\u\x\a\2\2\m\o\1\3\5\n\a\5\c\d\i\a\y\8\r\q\w\2\h\7\7\w\8\s\v\f\u\3\1\y\j\5\b\r\a\j\2\k\v\w\6\e\g\y\j\1\c\8\3\e\t\i\p\s\r\n\2\2\v\4\s\x\k\i\j\f\e\b\h\y\3\i\m\3\e\6\w\b\6\o\6\d\x\j\h\y\2\7\9\u\8\x\s\9\z\h\c\j\j\3\y\f\i\g\3\y\9\a\4\8\4\2\b\4\l\t\i\u\k\d\z\2\s\3\x\m\e\q\s\m\j\u\3\3\8\8\z\v\a\g\w\n\8\2\e\2\h\7\t\o\0\e\a\t\k\e\2\o\5\b\g\1\5\u\d\s\u\g\m\7\z\j\b\0\k\p\1\b\s\2\l\n\6\a\g\t\q\i\1\0\u\4\k\6\f\u\9\r\l\d\8\l\3\t\1\n\8\0\i\n\l\k\4\n\m\c\6\v\p\r\2\y\w\g\g\5\s\8\e\e\j\6\m\g\f\b\0\y\4\y\5\i\g\a\3\b\2\d\j\8\z\w\z\z\y\d\h\3\u\p\n\p\m\g\q\8\8\0\k\y\m\n\r\3\p\o\r\0\0\o\o\u\w\k\j\m\5\k\4\a\m\5\5\e\3\b\d\9\6\l\1\s\j\z\c\6\4\d\c\f\e\t\t\i\x\d\9\b\t\f\l\2\l\b\8\5\7\r\0\f\7\r\s\h\1\b\8\4\4\q\i\k\9\4\n\d\m\h\8\z\a\x\4\v\r\i\2\2\1\c\2\g\a\5\a\y\t\n\3\4\2\e\m\r\x\r\g\k\i\s\0\v\j\9\k\a\t\t\u\o\d\7\q ]] 00:05:48.150 00:05:48.150 real 0m3.185s 00:05:48.150 user 0m1.557s 00:05:48.150 sys 0m0.671s 00:05:48.150 ************************************ 00:05:48.150 END TEST dd_flags_misc_forced_aio 00:05:48.150 ************************************ 00:05:48.150 12:40:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.150 12:40:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:48.150 12:40:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:05:48.150 12:40:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:48.150 12:40:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:48.150 00:05:48.150 real 0m15.657s 00:05:48.150 user 0m6.686s 00:05:48.150 sys 0m4.302s 00:05:48.150 12:40:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.150 ************************************ 00:05:48.150 END TEST spdk_dd_posix 
00:05:48.150 ************************************ 00:05:48.150 12:40:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:48.150 12:40:56 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:05:48.150 12:40:56 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.150 12:40:56 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.150 12:40:56 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:48.150 ************************************ 00:05:48.150 START TEST spdk_dd_malloc 00:05:48.150 ************************************ 00:05:48.150 12:40:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:05:48.150 * Looking for test storage... 00:05:48.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:48.150 12:40:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:48.150 12:40:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:48.150 12:40:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.410 12:40:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:48.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.410 --rc genhtml_branch_coverage=1 00:05:48.410 --rc genhtml_function_coverage=1 00:05:48.410 --rc genhtml_legend=1 00:05:48.411 --rc geninfo_all_blocks=1 00:05:48.411 --rc geninfo_unexecuted_blocks=1 00:05:48.411 00:05:48.411 ' 00:05:48.411 12:40:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:48.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.411 --rc genhtml_branch_coverage=1 00:05:48.411 --rc genhtml_function_coverage=1 00:05:48.411 --rc genhtml_legend=1 00:05:48.411 --rc geninfo_all_blocks=1 00:05:48.411 --rc geninfo_unexecuted_blocks=1 00:05:48.411 00:05:48.411 ' 00:05:48.411 12:40:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:48.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.411 --rc genhtml_branch_coverage=1 00:05:48.411 --rc genhtml_function_coverage=1 00:05:48.411 --rc genhtml_legend=1 00:05:48.411 --rc geninfo_all_blocks=1 00:05:48.411 --rc geninfo_unexecuted_blocks=1 00:05:48.411 00:05:48.411 ' 00:05:48.411 12:40:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:48.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.411 --rc genhtml_branch_coverage=1 00:05:48.411 --rc genhtml_function_coverage=1 00:05:48.411 --rc genhtml_legend=1 00:05:48.411 --rc geninfo_all_blocks=1 00:05:48.411 --rc geninfo_unexecuted_blocks=1 00:05:48.411 00:05:48.411 ' 00:05:48.411 12:40:56 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:48.411 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:05:48.411 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:48.411 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:48.411 12:40:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:48.411 12:40:56 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.411 12:40:56 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.411 12:40:56 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.411 12:40:56 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:05:48.411 12:40:56 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.411 12:40:56 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:05:48.411 12:40:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.411 12:40:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.411 12:40:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:05:48.411 ************************************ 00:05:48.411 START TEST dd_malloc_copy 00:05:48.411 ************************************ 00:05:48.411 12:40:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:05:48.411 12:40:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:05:48.411 12:40:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:05:48.411 12:40:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
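
(Aside, a compressed sketch of what dd_malloc_copy below drives: two malloc bdevs of 1,048,576 blocks x 512 bytes, i.e. 512 MiB each, matching the "512/512 [MB]" progress lines further down, are created from an inline JSON config and copied into each other in both directions. gen_conf here stands for the helper that prints the config shown below; the process substitution is a simplification of the /dev/fd/62 plumbing in the log.)

gen_conf() {   # prints the bdev config that appears in the trace below
  cat <<'JSON'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" }, "method": "bdev_malloc_create" },
  { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc1" }, "method": "bdev_malloc_create" },
  { "method": "bdev_wait_for_examine" } ] } ] }
JSON
}
spdk_dd --ib=malloc0 --ob=malloc1 --json <(gen_conf)   # forward copy
spdk_dd --ib=malloc1 --ob=malloc0 --json <(gen_conf)   # reverse copy
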
00:05:48.411 12:40:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:05:48.411 12:40:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:05:48.411 12:40:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:05:48.411 12:40:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:05:48.411 12:40:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:05:48.411 12:40:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:05:48.411 12:40:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:05:48.411 { 00:05:48.411 "subsystems": [ 00:05:48.411 { 00:05:48.411 "subsystem": "bdev", 00:05:48.411 "config": [ 00:05:48.411 { 00:05:48.411 "params": { 00:05:48.411 "block_size": 512, 00:05:48.411 "num_blocks": 1048576, 00:05:48.411 "name": "malloc0" 00:05:48.411 }, 00:05:48.411 "method": "bdev_malloc_create" 00:05:48.411 }, 00:05:48.411 { 00:05:48.411 "params": { 00:05:48.411 "block_size": 512, 00:05:48.411 "num_blocks": 1048576, 00:05:48.411 "name": "malloc1" 00:05:48.411 }, 00:05:48.411 "method": "bdev_malloc_create" 00:05:48.411 }, 00:05:48.411 { 00:05:48.411 "method": "bdev_wait_for_examine" 00:05:48.411 } 00:05:48.411 ] 00:05:48.411 } 00:05:48.411 ] 00:05:48.411 } 00:05:48.411 [2024-11-15 12:40:56.921426] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:05:48.411 [2024-11-15 12:40:56.921518] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60504 ] 00:05:48.411 [2024-11-15 12:40:57.065506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.670 [2024-11-15 12:40:57.095174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.670 [2024-11-15 12:40:57.124220] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:50.047  [2024-11-15T12:40:59.654Z] Copying: 253/512 [MB] (253 MBps) [2024-11-15T12:40:59.654Z] Copying: 508/512 [MB] (254 MBps) [2024-11-15T12:40:59.654Z] Copying: 512/512 [MB] (average 253 MBps) 00:05:50.984 00:05:50.984 12:40:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:05:50.984 12:40:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:05:50.984 12:40:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:05:50.984 12:40:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:05:51.242 [2024-11-15 12:40:59.669195] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:51.242 [2024-11-15 12:40:59.669297] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60541 ] 00:05:51.242 { 00:05:51.242 "subsystems": [ 00:05:51.242 { 00:05:51.242 "subsystem": "bdev", 00:05:51.242 "config": [ 00:05:51.242 { 00:05:51.242 "params": { 00:05:51.242 "block_size": 512, 00:05:51.242 "num_blocks": 1048576, 00:05:51.242 "name": "malloc0" 00:05:51.242 }, 00:05:51.242 "method": "bdev_malloc_create" 00:05:51.242 }, 00:05:51.242 { 00:05:51.242 "params": { 00:05:51.242 "block_size": 512, 00:05:51.242 "num_blocks": 1048576, 00:05:51.242 "name": "malloc1" 00:05:51.242 }, 00:05:51.242 "method": "bdev_malloc_create" 00:05:51.242 }, 00:05:51.242 { 00:05:51.242 "method": "bdev_wait_for_examine" 00:05:51.242 } 00:05:51.242 ] 00:05:51.242 } 00:05:51.242 ] 00:05:51.242 } 00:05:51.242 [2024-11-15 12:40:59.820562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.242 [2024-11-15 12:40:59.862231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.242 [2024-11-15 12:40:59.899360] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.619  [2024-11-15T12:41:02.224Z] Copying: 245/512 [MB] (245 MBps) [2024-11-15T12:41:02.224Z] Copying: 500/512 [MB] (254 MBps) [2024-11-15T12:41:02.482Z] Copying: 512/512 [MB] (average 249 MBps) 00:05:53.812 00:05:54.070 00:05:54.070 real 0m5.627s 00:05:54.070 user 0m4.979s 00:05:54.070 sys 0m0.503s 00:05:54.070 12:41:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.070 ************************************ 00:05:54.070 END TEST dd_malloc_copy 00:05:54.070 ************************************ 00:05:54.070 12:41:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:05:54.070 00:05:54.070 real 0m5.867s 00:05:54.070 user 0m5.118s 00:05:54.070 sys 0m0.611s 00:05:54.070 12:41:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.070 12:41:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:05:54.070 ************************************ 00:05:54.070 END TEST spdk_dd_malloc 00:05:54.070 ************************************ 00:05:54.070 12:41:02 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:05:54.070 12:41:02 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:54.070 12:41:02 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.070 12:41:02 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:54.070 ************************************ 00:05:54.070 START TEST spdk_dd_bdev_to_bdev 00:05:54.070 ************************************ 00:05:54.070 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:05:54.070 * Looking for test storage... 
00:05:54.070 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:54.070 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:54.070 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version 00:05:54.070 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:54.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.330 --rc genhtml_branch_coverage=1 00:05:54.330 --rc genhtml_function_coverage=1 00:05:54.330 --rc genhtml_legend=1 00:05:54.330 --rc geninfo_all_blocks=1 00:05:54.330 --rc geninfo_unexecuted_blocks=1 00:05:54.330 00:05:54.330 ' 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:54.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.330 --rc genhtml_branch_coverage=1 00:05:54.330 --rc genhtml_function_coverage=1 00:05:54.330 --rc genhtml_legend=1 00:05:54.330 --rc geninfo_all_blocks=1 00:05:54.330 --rc geninfo_unexecuted_blocks=1 00:05:54.330 00:05:54.330 ' 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:54.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.330 --rc genhtml_branch_coverage=1 00:05:54.330 --rc genhtml_function_coverage=1 00:05:54.330 --rc genhtml_legend=1 00:05:54.330 --rc geninfo_all_blocks=1 00:05:54.330 --rc geninfo_unexecuted_blocks=1 00:05:54.330 00:05:54.330 ' 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:54.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.330 --rc genhtml_branch_coverage=1 00:05:54.330 --rc genhtml_function_coverage=1 00:05:54.330 --rc genhtml_legend=1 00:05:54.330 --rc geninfo_all_blocks=1 00:05:54.330 --rc geninfo_unexecuted_blocks=1 00:05:54.330 00:05:54.330 ' 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:54.330 12:41:02 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:05:54.330 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:54.331 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:54.331 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:05:54.331 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:05:54.331 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:05:54.331 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:05:54.331 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.331 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:05:54.331 ************************************ 00:05:54.331 START TEST dd_inflate_file 00:05:54.331 ************************************ 00:05:54.331 12:41:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:05:54.331 [2024-11-15 12:41:02.828037] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
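
(Aside: the dd_inflate_file step starting here pads the magic line written above, 26 characters plus a newline and presumably directed into dd.dump0, with 64 MiB of zeroes, which is why the size check a little further down reports 64 * 1048576 + 27 = 67108891 bytes. Sketched roughly:)

echo 'This Is Our Magic, find it' > dd.dump0                                   # 27 bytes with newline
spdk_dd --if=/dev/zero --of=dd.dump0 --oflag=append --bs=1048576 --count=64    # append 64 MiB of zeroes
test "$(wc -c < dd.dump0)" -eq $((64 * 1048576 + 27))                          # 67108891
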
00:05:54.331 [2024-11-15 12:41:02.828723] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60648 ] 00:05:54.331 [2024-11-15 12:41:02.964466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.331 [2024-11-15 12:41:02.992031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.589 [2024-11-15 12:41:03.019326] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:54.589  [2024-11-15T12:41:03.259Z] Copying: 64/64 [MB] (average 1600 MBps) 00:05:54.589 00:05:54.589 00:05:54.589 real 0m0.422s 00:05:54.589 user 0m0.229s 00:05:54.589 sys 0m0.212s 00:05:54.589 12:41:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.589 ************************************ 00:05:54.589 12:41:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:05:54.589 END TEST dd_inflate_file 00:05:54.589 ************************************ 00:05:54.589 12:41:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:05:54.589 12:41:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:05:54.589 12:41:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:05:54.589 12:41:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:54.589 12:41:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.589 12:41:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:05:54.589 12:41:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:05:54.589 12:41:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:05:54.589 12:41:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:05:54.589 ************************************ 00:05:54.589 START TEST dd_copy_to_out_bdev 00:05:54.589 ************************************ 00:05:54.590 12:41:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:05:54.848 [2024-11-15 12:41:03.304797] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
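
(Aside: the dd_copy_to_out_bdev run starting here pushes that 67108891-byte dump file into the Nvme0n1 bdev using the two-controller config printed below; the count=65 that follows is just the file size rounded up to whole 1 MiB blocks, 64 full blocks plus a partial one. A minimal sketch, with gen_conf again standing in for the /dev/fd/62 config plumbing:)

spdk_dd --if=dd.dump0 --ob=Nvme0n1 --json <(gen_conf)     # write the dump file into Nvme0n1
count=$(( (67108891 + 1048576 - 1) / 1048576 ))           # = 65 one-MiB blocks
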
00:05:54.848 [2024-11-15 12:41:03.304898] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60686 ] 00:05:54.848 { 00:05:54.848 "subsystems": [ 00:05:54.848 { 00:05:54.848 "subsystem": "bdev", 00:05:54.848 "config": [ 00:05:54.848 { 00:05:54.848 "params": { 00:05:54.848 "trtype": "pcie", 00:05:54.848 "traddr": "0000:00:10.0", 00:05:54.848 "name": "Nvme0" 00:05:54.848 }, 00:05:54.848 "method": "bdev_nvme_attach_controller" 00:05:54.848 }, 00:05:54.848 { 00:05:54.848 "params": { 00:05:54.848 "trtype": "pcie", 00:05:54.848 "traddr": "0000:00:11.0", 00:05:54.848 "name": "Nvme1" 00:05:54.848 }, 00:05:54.848 "method": "bdev_nvme_attach_controller" 00:05:54.848 }, 00:05:54.848 { 00:05:54.848 "method": "bdev_wait_for_examine" 00:05:54.848 } 00:05:54.848 ] 00:05:54.848 } 00:05:54.848 ] 00:05:54.848 } 00:05:54.848 [2024-11-15 12:41:03.449253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.848 [2024-11-15 12:41:03.479219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.848 [2024-11-15 12:41:03.509625] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.225  [2024-11-15T12:41:05.154Z] Copying: 51/64 [MB] (51 MBps) [2024-11-15T12:41:05.154Z] Copying: 64/64 [MB] (average 51 MBps) 00:05:56.484 00:05:56.484 00:05:56.484 real 0m1.828s 00:05:56.484 user 0m1.648s 00:05:56.484 sys 0m1.495s 00:05:56.484 12:41:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.484 ************************************ 00:05:56.484 END TEST dd_copy_to_out_bdev 00:05:56.484 ************************************ 00:05:56.484 12:41:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:05:56.484 12:41:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:05:56.484 12:41:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:05:56.484 12:41:05 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.484 12:41:05 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.484 12:41:05 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:05:56.484 ************************************ 00:05:56.484 START TEST dd_offset_magic 00:05:56.484 ************************************ 00:05:56.484 12:41:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:05:56.484 12:41:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:05:56.484 12:41:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:05:56.484 12:41:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:05:56.484 12:41:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:05:56.484 12:41:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:05:56.484 12:41:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 
00:05:56.484 12:41:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:05:56.485 12:41:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:05:56.744 [2024-11-15 12:41:05.185310] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:05:56.744 [2024-11-15 12:41:05.185402] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60727 ] 00:05:56.744 { 00:05:56.744 "subsystems": [ 00:05:56.744 { 00:05:56.744 "subsystem": "bdev", 00:05:56.744 "config": [ 00:05:56.744 { 00:05:56.744 "params": { 00:05:56.744 "trtype": "pcie", 00:05:56.744 "traddr": "0000:00:10.0", 00:05:56.744 "name": "Nvme0" 00:05:56.744 }, 00:05:56.744 "method": "bdev_nvme_attach_controller" 00:05:56.744 }, 00:05:56.744 { 00:05:56.744 "params": { 00:05:56.744 "trtype": "pcie", 00:05:56.744 "traddr": "0000:00:11.0", 00:05:56.744 "name": "Nvme1" 00:05:56.744 }, 00:05:56.744 "method": "bdev_nvme_attach_controller" 00:05:56.744 }, 00:05:56.744 { 00:05:56.744 "method": "bdev_wait_for_examine" 00:05:56.744 } 00:05:56.744 ] 00:05:56.744 } 00:05:56.744 ] 00:05:56.744 } 00:05:56.744 [2024-11-15 12:41:05.322910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.744 [2024-11-15 12:41:05.350403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.744 [2024-11-15 12:41:05.377160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:57.003  [2024-11-15T12:41:05.932Z] Copying: 65/65 [MB] (average 955 MBps) 00:05:57.262 00:05:57.262 12:41:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:05:57.262 12:41:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:05:57.262 12:41:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:05:57.262 12:41:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:05:57.262 [2024-11-15 12:41:05.819872] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:57.262 [2024-11-15 12:41:05.819992] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60741 ] 00:05:57.262 { 00:05:57.262 "subsystems": [ 00:05:57.262 { 00:05:57.262 "subsystem": "bdev", 00:05:57.262 "config": [ 00:05:57.262 { 00:05:57.262 "params": { 00:05:57.262 "trtype": "pcie", 00:05:57.262 "traddr": "0000:00:10.0", 00:05:57.262 "name": "Nvme0" 00:05:57.262 }, 00:05:57.262 "method": "bdev_nvme_attach_controller" 00:05:57.262 }, 00:05:57.262 { 00:05:57.262 "params": { 00:05:57.262 "trtype": "pcie", 00:05:57.262 "traddr": "0000:00:11.0", 00:05:57.262 "name": "Nvme1" 00:05:57.262 }, 00:05:57.262 "method": "bdev_nvme_attach_controller" 00:05:57.262 }, 00:05:57.262 { 00:05:57.262 "method": "bdev_wait_for_examine" 00:05:57.262 } 00:05:57.262 ] 00:05:57.262 } 00:05:57.262 ] 00:05:57.262 } 00:05:57.521 [2024-11-15 12:41:05.964789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.521 [2024-11-15 12:41:05.992777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.521 [2024-11-15 12:41:06.020279] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:57.521  [2024-11-15T12:41:06.449Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:57.779 00:05:57.779 12:41:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:05:57.779 12:41:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:05:57.779 12:41:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:05:57.779 12:41:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:05:57.779 12:41:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:05:57.779 12:41:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:05:57.779 12:41:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:05:57.779 [2024-11-15 12:41:06.342324] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
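Aside (not part of the captured output): each dd_offset_magic cycle traced above reduces to two spdk_dd calls per offset (16 and then 64, in 1 MiB blocks): a 65 MiB write from Nvme0n1 into Nvme1n1 at the given seek, then a 1 MiB read back at the same offset whose first 26 bytes must equal the magic string "This Is Our Magic, find it". A minimal standalone sketch, assuming the JSON bdev config printed above has been saved to a file named nvme_bdevs.json (the harness instead pipes the same config over /dev/fd/62, and spdk_dd here stands for build/bin/spdk_dd):

    spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json nvme_bdevs.json   # write pass
    spdk_dd --ib=Nvme1n1 --of=dd.dump1 --count=1 --skip=16 --bs=1048576 --json nvme_bdevs.json   # read-back pass
    head -c 26 dd.dump1    # should print: This Is Our Magic, find it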
00:05:57.779 [2024-11-15 12:41:06.342451] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60758 ] 00:05:57.779 { 00:05:57.779 "subsystems": [ 00:05:57.779 { 00:05:57.779 "subsystem": "bdev", 00:05:57.779 "config": [ 00:05:57.779 { 00:05:57.779 "params": { 00:05:57.779 "trtype": "pcie", 00:05:57.779 "traddr": "0000:00:10.0", 00:05:57.779 "name": "Nvme0" 00:05:57.779 }, 00:05:57.779 "method": "bdev_nvme_attach_controller" 00:05:57.779 }, 00:05:57.779 { 00:05:57.779 "params": { 00:05:57.779 "trtype": "pcie", 00:05:57.779 "traddr": "0000:00:11.0", 00:05:57.779 "name": "Nvme1" 00:05:57.779 }, 00:05:57.779 "method": "bdev_nvme_attach_controller" 00:05:57.779 }, 00:05:57.779 { 00:05:57.779 "method": "bdev_wait_for_examine" 00:05:57.779 } 00:05:57.779 ] 00:05:57.779 } 00:05:57.779 ] 00:05:57.779 } 00:05:58.038 [2024-11-15 12:41:06.481692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.038 [2024-11-15 12:41:06.509267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.038 [2024-11-15 12:41:06.536120] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:58.297  [2024-11-15T12:41:06.967Z] Copying: 65/65 [MB] (average 1031 MBps) 00:05:58.297 00:05:58.297 12:41:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:05:58.297 12:41:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:05:58.297 12:41:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:05:58.297 12:41:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:05:58.556 [2024-11-15 12:41:06.969115] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:58.556 [2024-11-15 12:41:06.969225] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60772 ] 00:05:58.556 { 00:05:58.556 "subsystems": [ 00:05:58.556 { 00:05:58.556 "subsystem": "bdev", 00:05:58.556 "config": [ 00:05:58.556 { 00:05:58.556 "params": { 00:05:58.556 "trtype": "pcie", 00:05:58.556 "traddr": "0000:00:10.0", 00:05:58.556 "name": "Nvme0" 00:05:58.556 }, 00:05:58.556 "method": "bdev_nvme_attach_controller" 00:05:58.556 }, 00:05:58.556 { 00:05:58.556 "params": { 00:05:58.556 "trtype": "pcie", 00:05:58.556 "traddr": "0000:00:11.0", 00:05:58.556 "name": "Nvme1" 00:05:58.556 }, 00:05:58.556 "method": "bdev_nvme_attach_controller" 00:05:58.556 }, 00:05:58.556 { 00:05:58.556 "method": "bdev_wait_for_examine" 00:05:58.556 } 00:05:58.556 ] 00:05:58.556 } 00:05:58.556 ] 00:05:58.556 } 00:05:58.556 [2024-11-15 12:41:07.113357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.556 [2024-11-15 12:41:07.144894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.556 [2024-11-15 12:41:07.174237] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:58.815  [2024-11-15T12:41:07.485Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:58.815 00:05:58.815 12:41:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:05:58.815 ************************************ 00:05:58.815 END TEST dd_offset_magic 00:05:58.815 ************************************ 00:05:58.815 12:41:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:05:58.815 00:05:58.815 real 0m2.328s 00:05:58.815 user 0m1.741s 00:05:58.815 sys 0m0.566s 00:05:58.815 12:41:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.815 12:41:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:05:59.074 12:41:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:05:59.074 12:41:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:05:59.074 12:41:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:59.074 12:41:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:05:59.074 12:41:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:05:59.074 12:41:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:05:59.074 12:41:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:05:59.074 12:41:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:05:59.074 12:41:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:05:59.074 12:41:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:05:59.074 12:41:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:05:59.074 [2024-11-15 12:41:07.556323] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
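Aside (not part of the captured output): the cleanup step being traced here, clear_nvme Nvme0n1 '' 4194330, zero-fills at least the first 4194330 bytes of the namespace; with bs=1048576 that rounds up to count=5, which is why five 1 MiB blocks are written. A sketch of the equivalent call, again with nvme_bdevs.json standing in for the config the harness passes on /dev/fd/62:

    # ceil(4194330 / 1048576) = 5 one-MiB blocks of zeroes over the start of Nvme0n1
    spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json nvme_bdevs.json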
00:05:59.074 [2024-11-15 12:41:07.556418] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60811 ] 00:05:59.074 { 00:05:59.074 "subsystems": [ 00:05:59.074 { 00:05:59.074 "subsystem": "bdev", 00:05:59.074 "config": [ 00:05:59.074 { 00:05:59.074 "params": { 00:05:59.074 "trtype": "pcie", 00:05:59.074 "traddr": "0000:00:10.0", 00:05:59.074 "name": "Nvme0" 00:05:59.074 }, 00:05:59.074 "method": "bdev_nvme_attach_controller" 00:05:59.074 }, 00:05:59.074 { 00:05:59.074 "params": { 00:05:59.074 "trtype": "pcie", 00:05:59.074 "traddr": "0000:00:11.0", 00:05:59.074 "name": "Nvme1" 00:05:59.074 }, 00:05:59.074 "method": "bdev_nvme_attach_controller" 00:05:59.074 }, 00:05:59.074 { 00:05:59.074 "method": "bdev_wait_for_examine" 00:05:59.074 } 00:05:59.074 ] 00:05:59.074 } 00:05:59.074 ] 00:05:59.074 } 00:05:59.074 [2024-11-15 12:41:07.701005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.074 [2024-11-15 12:41:07.729829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.333 [2024-11-15 12:41:07.758979] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:59.333  [2024-11-15T12:41:08.262Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:05:59.592 00:05:59.592 12:41:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:05:59.592 12:41:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:05:59.592 12:41:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:05:59.592 12:41:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:05:59.592 12:41:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:05:59.592 12:41:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:05:59.592 12:41:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:05:59.592 12:41:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:05:59.592 12:41:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:05:59.592 12:41:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:05:59.592 [2024-11-15 12:41:08.089697] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:05:59.592 [2024-11-15 12:41:08.089795] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60827 ] 00:05:59.592 { 00:05:59.592 "subsystems": [ 00:05:59.592 { 00:05:59.592 "subsystem": "bdev", 00:05:59.592 "config": [ 00:05:59.592 { 00:05:59.592 "params": { 00:05:59.592 "trtype": "pcie", 00:05:59.592 "traddr": "0000:00:10.0", 00:05:59.592 "name": "Nvme0" 00:05:59.592 }, 00:05:59.592 "method": "bdev_nvme_attach_controller" 00:05:59.592 }, 00:05:59.592 { 00:05:59.592 "params": { 00:05:59.592 "trtype": "pcie", 00:05:59.592 "traddr": "0000:00:11.0", 00:05:59.592 "name": "Nvme1" 00:05:59.592 }, 00:05:59.592 "method": "bdev_nvme_attach_controller" 00:05:59.592 }, 00:05:59.592 { 00:05:59.592 "method": "bdev_wait_for_examine" 00:05:59.592 } 00:05:59.592 ] 00:05:59.592 } 00:05:59.592 ] 00:05:59.592 } 00:05:59.592 [2024-11-15 12:41:08.233578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.851 [2024-11-15 12:41:08.262700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.851 [2024-11-15 12:41:08.292234] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:59.851  [2024-11-15T12:41:08.779Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:06:00.109 00:06:00.109 12:41:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:06:00.109 ************************************ 00:06:00.109 END TEST spdk_dd_bdev_to_bdev 00:06:00.109 ************************************ 00:06:00.109 00:06:00.109 real 0m6.030s 00:06:00.109 user 0m4.570s 00:06:00.109 sys 0m2.798s 00:06:00.109 12:41:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.109 12:41:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:00.109 12:41:08 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:06:00.109 12:41:08 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:00.109 12:41:08 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.109 12:41:08 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.109 12:41:08 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:00.109 ************************************ 00:06:00.109 START TEST spdk_dd_uring 00:06:00.109 ************************************ 00:06:00.109 12:41:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:00.109 * Looking for test storage... 
00:06:00.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:00.109 12:41:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:00.109 12:41:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lcov --version 00:06:00.109 12:41:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:00.369 12:41:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:00.369 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.369 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.369 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.369 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.369 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.369 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.369 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.369 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.369 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.369 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.369 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.369 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:06:00.369 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:06:00.369 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.369 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.369 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:06:00.369 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:06:00.369 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.369 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:06:00.369 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.369 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:06:00.369 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:06:00.369 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.369 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:06:00.369 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.369 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.369 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.369 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:06:00.369 12:41:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:00.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.370 --rc genhtml_branch_coverage=1 00:06:00.370 --rc genhtml_function_coverage=1 00:06:00.370 --rc genhtml_legend=1 00:06:00.370 --rc geninfo_all_blocks=1 00:06:00.370 --rc geninfo_unexecuted_blocks=1 00:06:00.370 00:06:00.370 ' 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:00.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.370 --rc genhtml_branch_coverage=1 00:06:00.370 --rc genhtml_function_coverage=1 00:06:00.370 --rc genhtml_legend=1 00:06:00.370 --rc geninfo_all_blocks=1 00:06:00.370 --rc geninfo_unexecuted_blocks=1 00:06:00.370 00:06:00.370 ' 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:00.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.370 --rc genhtml_branch_coverage=1 00:06:00.370 --rc genhtml_function_coverage=1 00:06:00.370 --rc genhtml_legend=1 00:06:00.370 --rc geninfo_all_blocks=1 00:06:00.370 --rc geninfo_unexecuted_blocks=1 00:06:00.370 00:06:00.370 ' 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:00.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.370 --rc genhtml_branch_coverage=1 00:06:00.370 --rc genhtml_function_coverage=1 00:06:00.370 --rc genhtml_legend=1 00:06:00.370 --rc geninfo_all_blocks=1 00:06:00.370 --rc geninfo_unexecuted_blocks=1 00:06:00.370 00:06:00.370 ' 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:00.370 ************************************ 00:06:00.370 START TEST dd_uring_copy 00:06:00.370 ************************************ 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:00.370 
12:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=thmwwhlk0xwijzkp98o4udcin4hst4wjvcipprn9z30u4ing2up0z8g3m96pgpq2xuwlca5dihslzptvvwuv2sipjuhex94qsj79ml78ipv0viz4uyq21jn1a5ycqym8gp7k3gq2mrt7uwy44nuhn4hs98i84anuc6586u9mmgoqttal963xxq2oh8y9jf4h5ak7dqu71cjlssxw63akue35enewqeqryjb1l8hqf5c8oh4urs8oiwl7uknvmj8bx745m8jbb4aguwxoir0eoxtrqcess0q16xzu6g6e0pv9v62a72k8trp7bvgj0mh4nvogj4whi80q0mtbn2kfluzzcov9cy5covmgvkoken5kr5rwcng3kq6ffd5ezdc7bdkpfqlyfb2671dzskmfp2961pmqc8betbhdvi5lzcbfi3qvuvhhkms4wemf16t8cyq2oseaeylax40102u8jhdoksbjx36uynmz76iu97z2ygh2tit3eytlvbu1k4kv49flitqbmyikqh01vva1sv39mppmprcse0lnwfe0m1x0g40m0m1p3qkmoe4eevh0k1fupd08hjmg3kjic9i5iul18dnb4gvd1bi8wlvdhbkuqmatwjpe5vo5cm2mckm3omzia9scw0ngcy7mmbtsv0edbuogmr16ip9z1opur51bpu1reu9lx2iu6zo8ob2461qo8bxjyawvmqui8jseif7fmoqcbupwoehea97fea3nes4pbcdan2js0u7mzd1l5xa0g1n8nff60njg6ybcw7k73blrtsywfdprgj4qse4hrbixzrrvt6scsy4pin3esz5v4vtp8rdsnbppsyr2uzvc2pq9jeqg4jhpbg25i3rr79hc3lizf8oml9kdqms7hsk4wlnkhoory82v3qblpsxznj4n5q9dflj37oqa30sh4ur61di08s5llgnso29oejoxzvscwrcvfzem4liaxwnghmq5gx1xthfues0cw7enttllk4rphcz4s13tpah0 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
thmwwhlk0xwijzkp98o4udcin4hst4wjvcipprn9z30u4ing2up0z8g3m96pgpq2xuwlca5dihslzptvvwuv2sipjuhex94qsj79ml78ipv0viz4uyq21jn1a5ycqym8gp7k3gq2mrt7uwy44nuhn4hs98i84anuc6586u9mmgoqttal963xxq2oh8y9jf4h5ak7dqu71cjlssxw63akue35enewqeqryjb1l8hqf5c8oh4urs8oiwl7uknvmj8bx745m8jbb4aguwxoir0eoxtrqcess0q16xzu6g6e0pv9v62a72k8trp7bvgj0mh4nvogj4whi80q0mtbn2kfluzzcov9cy5covmgvkoken5kr5rwcng3kq6ffd5ezdc7bdkpfqlyfb2671dzskmfp2961pmqc8betbhdvi5lzcbfi3qvuvhhkms4wemf16t8cyq2oseaeylax40102u8jhdoksbjx36uynmz76iu97z2ygh2tit3eytlvbu1k4kv49flitqbmyikqh01vva1sv39mppmprcse0lnwfe0m1x0g40m0m1p3qkmoe4eevh0k1fupd08hjmg3kjic9i5iul18dnb4gvd1bi8wlvdhbkuqmatwjpe5vo5cm2mckm3omzia9scw0ngcy7mmbtsv0edbuogmr16ip9z1opur51bpu1reu9lx2iu6zo8ob2461qo8bxjyawvmqui8jseif7fmoqcbupwoehea97fea3nes4pbcdan2js0u7mzd1l5xa0g1n8nff60njg6ybcw7k73blrtsywfdprgj4qse4hrbixzrrvt6scsy4pin3esz5v4vtp8rdsnbppsyr2uzvc2pq9jeqg4jhpbg25i3rr79hc3lizf8oml9kdqms7hsk4wlnkhoory82v3qblpsxznj4n5q9dflj37oqa30sh4ur61di08s5llgnso29oejoxzvscwrcvfzem4liaxwnghmq5gx1xthfues0cw7enttllk4rphcz4s13tpah0 00:06:00.370 12:41:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:06:00.370 [2024-11-15 12:41:08.945004] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:06:00.370 [2024-11-15 12:41:08.945258] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60900 ] 00:06:00.629 [2024-11-15 12:41:09.081842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.629 [2024-11-15 12:41:09.110022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.630 [2024-11-15 12:41:09.137207] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.197  [2024-11-15T12:41:09.867Z] Copying: 511/511 [MB] (average 1689 MBps) 00:06:01.197 00:06:01.197 12:41:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:06:01.197 12:41:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:06:01.197 12:41:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:01.197 12:41:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:01.197 [2024-11-15 12:41:09.827993] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
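Aside (not part of the captured output): the dd_uring_copy setup traced above creates a zram device through sysfs and then layers the SPDK bdevs on top of it. A rough sketch of those steps; the disksize path is assumed, since the trace only shows the "echo 512M" and the /sys/block/zram1 existence check:

    dev_id=$(cat /sys/class/zram-control/hot_add)    # kernel returns the new device index (1 in this run)
    echo 512M > /sys/block/zram${dev_id}/disksize    # size the device; assumed sysfs target for the 512M echo
    # spdk_dd is then handed /dev/zram1 as a uring bdev named uring0, alongside a malloc bdev
    # of 1048576 x 512-byte blocks (512 MiB) named malloc0, via the JSON config shown below.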
00:06:01.197 [2024-11-15 12:41:09.828062] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60917 ] 00:06:01.197 { 00:06:01.197 "subsystems": [ 00:06:01.197 { 00:06:01.197 "subsystem": "bdev", 00:06:01.197 "config": [ 00:06:01.197 { 00:06:01.197 "params": { 00:06:01.197 "block_size": 512, 00:06:01.197 "num_blocks": 1048576, 00:06:01.197 "name": "malloc0" 00:06:01.197 }, 00:06:01.197 "method": "bdev_malloc_create" 00:06:01.197 }, 00:06:01.197 { 00:06:01.197 "params": { 00:06:01.197 "filename": "/dev/zram1", 00:06:01.197 "name": "uring0" 00:06:01.197 }, 00:06:01.197 "method": "bdev_uring_create" 00:06:01.197 }, 00:06:01.197 { 00:06:01.197 "method": "bdev_wait_for_examine" 00:06:01.197 } 00:06:01.197 ] 00:06:01.197 } 00:06:01.197 ] 00:06:01.197 } 00:06:01.455 [2024-11-15 12:41:09.957461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.455 [2024-11-15 12:41:09.983864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.455 [2024-11-15 12:41:10.014151] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:02.833  [2024-11-15T12:41:12.071Z] Copying: 270/512 [MB] (270 MBps) [2024-11-15T12:41:12.331Z] Copying: 512/512 [MB] (average 274 MBps) 00:06:03.661 00:06:03.661 12:41:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:06:03.661 12:41:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:06:03.661 12:41:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:03.661 12:41:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:03.661 [2024-11-15 12:41:12.262936] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:06:03.661 [2024-11-15 12:41:12.263030] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60957 ] 00:06:03.661 { 00:06:03.661 "subsystems": [ 00:06:03.661 { 00:06:03.661 "subsystem": "bdev", 00:06:03.661 "config": [ 00:06:03.661 { 00:06:03.661 "params": { 00:06:03.661 "block_size": 512, 00:06:03.661 "num_blocks": 1048576, 00:06:03.661 "name": "malloc0" 00:06:03.661 }, 00:06:03.661 "method": "bdev_malloc_create" 00:06:03.661 }, 00:06:03.661 { 00:06:03.661 "params": { 00:06:03.661 "filename": "/dev/zram1", 00:06:03.661 "name": "uring0" 00:06:03.661 }, 00:06:03.661 "method": "bdev_uring_create" 00:06:03.661 }, 00:06:03.661 { 00:06:03.661 "method": "bdev_wait_for_examine" 00:06:03.661 } 00:06:03.661 ] 00:06:03.661 } 00:06:03.661 ] 00:06:03.661 } 00:06:03.920 [2024-11-15 12:41:12.405822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.920 [2024-11-15 12:41:12.436566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.920 [2024-11-15 12:41:12.469127] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:05.299  [2024-11-15T12:41:14.910Z] Copying: 201/512 [MB] (201 MBps) [2024-11-15T12:41:15.170Z] Copying: 392/512 [MB] (191 MBps) [2024-11-15T12:41:15.430Z] Copying: 512/512 [MB] (average 199 MBps) 00:06:06.760 00:06:06.760 12:41:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:06:06.760 12:41:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ thmwwhlk0xwijzkp98o4udcin4hst4wjvcipprn9z30u4ing2up0z8g3m96pgpq2xuwlca5dihslzptvvwuv2sipjuhex94qsj79ml78ipv0viz4uyq21jn1a5ycqym8gp7k3gq2mrt7uwy44nuhn4hs98i84anuc6586u9mmgoqttal963xxq2oh8y9jf4h5ak7dqu71cjlssxw63akue35enewqeqryjb1l8hqf5c8oh4urs8oiwl7uknvmj8bx745m8jbb4aguwxoir0eoxtrqcess0q16xzu6g6e0pv9v62a72k8trp7bvgj0mh4nvogj4whi80q0mtbn2kfluzzcov9cy5covmgvkoken5kr5rwcng3kq6ffd5ezdc7bdkpfqlyfb2671dzskmfp2961pmqc8betbhdvi5lzcbfi3qvuvhhkms4wemf16t8cyq2oseaeylax40102u8jhdoksbjx36uynmz76iu97z2ygh2tit3eytlvbu1k4kv49flitqbmyikqh01vva1sv39mppmprcse0lnwfe0m1x0g40m0m1p3qkmoe4eevh0k1fupd08hjmg3kjic9i5iul18dnb4gvd1bi8wlvdhbkuqmatwjpe5vo5cm2mckm3omzia9scw0ngcy7mmbtsv0edbuogmr16ip9z1opur51bpu1reu9lx2iu6zo8ob2461qo8bxjyawvmqui8jseif7fmoqcbupwoehea97fea3nes4pbcdan2js0u7mzd1l5xa0g1n8nff60njg6ybcw7k73blrtsywfdprgj4qse4hrbixzrrvt6scsy4pin3esz5v4vtp8rdsnbppsyr2uzvc2pq9jeqg4jhpbg25i3rr79hc3lizf8oml9kdqms7hsk4wlnkhoory82v3qblpsxznj4n5q9dflj37oqa30sh4ur61di08s5llgnso29oejoxzvscwrcvfzem4liaxwnghmq5gx1xthfues0cw7enttllk4rphcz4s13tpah0 == 
\t\h\m\w\w\h\l\k\0\x\w\i\j\z\k\p\9\8\o\4\u\d\c\i\n\4\h\s\t\4\w\j\v\c\i\p\p\r\n\9\z\3\0\u\4\i\n\g\2\u\p\0\z\8\g\3\m\9\6\p\g\p\q\2\x\u\w\l\c\a\5\d\i\h\s\l\z\p\t\v\v\w\u\v\2\s\i\p\j\u\h\e\x\9\4\q\s\j\7\9\m\l\7\8\i\p\v\0\v\i\z\4\u\y\q\2\1\j\n\1\a\5\y\c\q\y\m\8\g\p\7\k\3\g\q\2\m\r\t\7\u\w\y\4\4\n\u\h\n\4\h\s\9\8\i\8\4\a\n\u\c\6\5\8\6\u\9\m\m\g\o\q\t\t\a\l\9\6\3\x\x\q\2\o\h\8\y\9\j\f\4\h\5\a\k\7\d\q\u\7\1\c\j\l\s\s\x\w\6\3\a\k\u\e\3\5\e\n\e\w\q\e\q\r\y\j\b\1\l\8\h\q\f\5\c\8\o\h\4\u\r\s\8\o\i\w\l\7\u\k\n\v\m\j\8\b\x\7\4\5\m\8\j\b\b\4\a\g\u\w\x\o\i\r\0\e\o\x\t\r\q\c\e\s\s\0\q\1\6\x\z\u\6\g\6\e\0\p\v\9\v\6\2\a\7\2\k\8\t\r\p\7\b\v\g\j\0\m\h\4\n\v\o\g\j\4\w\h\i\8\0\q\0\m\t\b\n\2\k\f\l\u\z\z\c\o\v\9\c\y\5\c\o\v\m\g\v\k\o\k\e\n\5\k\r\5\r\w\c\n\g\3\k\q\6\f\f\d\5\e\z\d\c\7\b\d\k\p\f\q\l\y\f\b\2\6\7\1\d\z\s\k\m\f\p\2\9\6\1\p\m\q\c\8\b\e\t\b\h\d\v\i\5\l\z\c\b\f\i\3\q\v\u\v\h\h\k\m\s\4\w\e\m\f\1\6\t\8\c\y\q\2\o\s\e\a\e\y\l\a\x\4\0\1\0\2\u\8\j\h\d\o\k\s\b\j\x\3\6\u\y\n\m\z\7\6\i\u\9\7\z\2\y\g\h\2\t\i\t\3\e\y\t\l\v\b\u\1\k\4\k\v\4\9\f\l\i\t\q\b\m\y\i\k\q\h\0\1\v\v\a\1\s\v\3\9\m\p\p\m\p\r\c\s\e\0\l\n\w\f\e\0\m\1\x\0\g\4\0\m\0\m\1\p\3\q\k\m\o\e\4\e\e\v\h\0\k\1\f\u\p\d\0\8\h\j\m\g\3\k\j\i\c\9\i\5\i\u\l\1\8\d\n\b\4\g\v\d\1\b\i\8\w\l\v\d\h\b\k\u\q\m\a\t\w\j\p\e\5\v\o\5\c\m\2\m\c\k\m\3\o\m\z\i\a\9\s\c\w\0\n\g\c\y\7\m\m\b\t\s\v\0\e\d\b\u\o\g\m\r\1\6\i\p\9\z\1\o\p\u\r\5\1\b\p\u\1\r\e\u\9\l\x\2\i\u\6\z\o\8\o\b\2\4\6\1\q\o\8\b\x\j\y\a\w\v\m\q\u\i\8\j\s\e\i\f\7\f\m\o\q\c\b\u\p\w\o\e\h\e\a\9\7\f\e\a\3\n\e\s\4\p\b\c\d\a\n\2\j\s\0\u\7\m\z\d\1\l\5\x\a\0\g\1\n\8\n\f\f\6\0\n\j\g\6\y\b\c\w\7\k\7\3\b\l\r\t\s\y\w\f\d\p\r\g\j\4\q\s\e\4\h\r\b\i\x\z\r\r\v\t\6\s\c\s\y\4\p\i\n\3\e\s\z\5\v\4\v\t\p\8\r\d\s\n\b\p\p\s\y\r\2\u\z\v\c\2\p\q\9\j\e\q\g\4\j\h\p\b\g\2\5\i\3\r\r\7\9\h\c\3\l\i\z\f\8\o\m\l\9\k\d\q\m\s\7\h\s\k\4\w\l\n\k\h\o\o\r\y\8\2\v\3\q\b\l\p\s\x\z\n\j\4\n\5\q\9\d\f\l\j\3\7\o\q\a\3\0\s\h\4\u\r\6\1\d\i\0\8\s\5\l\l\g\n\s\o\2\9\o\e\j\o\x\z\v\s\c\w\r\c\v\f\z\e\m\4\l\i\a\x\w\n\g\h\m\q\5\g\x\1\x\t\h\f\u\e\s\0\c\w\7\e\n\t\t\l\l\k\4\r\p\h\c\z\4\s\1\3\t\p\a\h\0 ]] 00:06:06.760 12:41:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:06:06.760 12:41:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ thmwwhlk0xwijzkp98o4udcin4hst4wjvcipprn9z30u4ing2up0z8g3m96pgpq2xuwlca5dihslzptvvwuv2sipjuhex94qsj79ml78ipv0viz4uyq21jn1a5ycqym8gp7k3gq2mrt7uwy44nuhn4hs98i84anuc6586u9mmgoqttal963xxq2oh8y9jf4h5ak7dqu71cjlssxw63akue35enewqeqryjb1l8hqf5c8oh4urs8oiwl7uknvmj8bx745m8jbb4aguwxoir0eoxtrqcess0q16xzu6g6e0pv9v62a72k8trp7bvgj0mh4nvogj4whi80q0mtbn2kfluzzcov9cy5covmgvkoken5kr5rwcng3kq6ffd5ezdc7bdkpfqlyfb2671dzskmfp2961pmqc8betbhdvi5lzcbfi3qvuvhhkms4wemf16t8cyq2oseaeylax40102u8jhdoksbjx36uynmz76iu97z2ygh2tit3eytlvbu1k4kv49flitqbmyikqh01vva1sv39mppmprcse0lnwfe0m1x0g40m0m1p3qkmoe4eevh0k1fupd08hjmg3kjic9i5iul18dnb4gvd1bi8wlvdhbkuqmatwjpe5vo5cm2mckm3omzia9scw0ngcy7mmbtsv0edbuogmr16ip9z1opur51bpu1reu9lx2iu6zo8ob2461qo8bxjyawvmqui8jseif7fmoqcbupwoehea97fea3nes4pbcdan2js0u7mzd1l5xa0g1n8nff60njg6ybcw7k73blrtsywfdprgj4qse4hrbixzrrvt6scsy4pin3esz5v4vtp8rdsnbppsyr2uzvc2pq9jeqg4jhpbg25i3rr79hc3lizf8oml9kdqms7hsk4wlnkhoory82v3qblpsxznj4n5q9dflj37oqa30sh4ur61di08s5llgnso29oejoxzvscwrcvfzem4liaxwnghmq5gx1xthfues0cw7enttllk4rphcz4s13tpah0 == 
\t\h\m\w\w\h\l\k\0\x\w\i\j\z\k\p\9\8\o\4\u\d\c\i\n\4\h\s\t\4\w\j\v\c\i\p\p\r\n\9\z\3\0\u\4\i\n\g\2\u\p\0\z\8\g\3\m\9\6\p\g\p\q\2\x\u\w\l\c\a\5\d\i\h\s\l\z\p\t\v\v\w\u\v\2\s\i\p\j\u\h\e\x\9\4\q\s\j\7\9\m\l\7\8\i\p\v\0\v\i\z\4\u\y\q\2\1\j\n\1\a\5\y\c\q\y\m\8\g\p\7\k\3\g\q\2\m\r\t\7\u\w\y\4\4\n\u\h\n\4\h\s\9\8\i\8\4\a\n\u\c\6\5\8\6\u\9\m\m\g\o\q\t\t\a\l\9\6\3\x\x\q\2\o\h\8\y\9\j\f\4\h\5\a\k\7\d\q\u\7\1\c\j\l\s\s\x\w\6\3\a\k\u\e\3\5\e\n\e\w\q\e\q\r\y\j\b\1\l\8\h\q\f\5\c\8\o\h\4\u\r\s\8\o\i\w\l\7\u\k\n\v\m\j\8\b\x\7\4\5\m\8\j\b\b\4\a\g\u\w\x\o\i\r\0\e\o\x\t\r\q\c\e\s\s\0\q\1\6\x\z\u\6\g\6\e\0\p\v\9\v\6\2\a\7\2\k\8\t\r\p\7\b\v\g\j\0\m\h\4\n\v\o\g\j\4\w\h\i\8\0\q\0\m\t\b\n\2\k\f\l\u\z\z\c\o\v\9\c\y\5\c\o\v\m\g\v\k\o\k\e\n\5\k\r\5\r\w\c\n\g\3\k\q\6\f\f\d\5\e\z\d\c\7\b\d\k\p\f\q\l\y\f\b\2\6\7\1\d\z\s\k\m\f\p\2\9\6\1\p\m\q\c\8\b\e\t\b\h\d\v\i\5\l\z\c\b\f\i\3\q\v\u\v\h\h\k\m\s\4\w\e\m\f\1\6\t\8\c\y\q\2\o\s\e\a\e\y\l\a\x\4\0\1\0\2\u\8\j\h\d\o\k\s\b\j\x\3\6\u\y\n\m\z\7\6\i\u\9\7\z\2\y\g\h\2\t\i\t\3\e\y\t\l\v\b\u\1\k\4\k\v\4\9\f\l\i\t\q\b\m\y\i\k\q\h\0\1\v\v\a\1\s\v\3\9\m\p\p\m\p\r\c\s\e\0\l\n\w\f\e\0\m\1\x\0\g\4\0\m\0\m\1\p\3\q\k\m\o\e\4\e\e\v\h\0\k\1\f\u\p\d\0\8\h\j\m\g\3\k\j\i\c\9\i\5\i\u\l\1\8\d\n\b\4\g\v\d\1\b\i\8\w\l\v\d\h\b\k\u\q\m\a\t\w\j\p\e\5\v\o\5\c\m\2\m\c\k\m\3\o\m\z\i\a\9\s\c\w\0\n\g\c\y\7\m\m\b\t\s\v\0\e\d\b\u\o\g\m\r\1\6\i\p\9\z\1\o\p\u\r\5\1\b\p\u\1\r\e\u\9\l\x\2\i\u\6\z\o\8\o\b\2\4\6\1\q\o\8\b\x\j\y\a\w\v\m\q\u\i\8\j\s\e\i\f\7\f\m\o\q\c\b\u\p\w\o\e\h\e\a\9\7\f\e\a\3\n\e\s\4\p\b\c\d\a\n\2\j\s\0\u\7\m\z\d\1\l\5\x\a\0\g\1\n\8\n\f\f\6\0\n\j\g\6\y\b\c\w\7\k\7\3\b\l\r\t\s\y\w\f\d\p\r\g\j\4\q\s\e\4\h\r\b\i\x\z\r\r\v\t\6\s\c\s\y\4\p\i\n\3\e\s\z\5\v\4\v\t\p\8\r\d\s\n\b\p\p\s\y\r\2\u\z\v\c\2\p\q\9\j\e\q\g\4\j\h\p\b\g\2\5\i\3\r\r\7\9\h\c\3\l\i\z\f\8\o\m\l\9\k\d\q\m\s\7\h\s\k\4\w\l\n\k\h\o\o\r\y\8\2\v\3\q\b\l\p\s\x\z\n\j\4\n\5\q\9\d\f\l\j\3\7\o\q\a\3\0\s\h\4\u\r\6\1\d\i\0\8\s\5\l\l\g\n\s\o\2\9\o\e\j\o\x\z\v\s\c\w\r\c\v\f\z\e\m\4\l\i\a\x\w\n\g\h\m\q\5\g\x\1\x\t\h\f\u\e\s\0\c\w\7\e\n\t\t\l\l\k\4\r\p\h\c\z\4\s\1\3\t\p\a\h\0 ]] 00:06:06.760 12:41:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:07.019 12:41:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:06:07.019 12:41:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:06:07.019 12:41:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:07.019 12:41:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:07.278 [2024-11-15 12:41:15.696381] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
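Aside (not part of the captured output): the passes traced above move the magic dump file through the zram-backed bdev and back, verifying it byte for byte (the 1024-character prefix is additionally checked against the generated magic string). The shape of the round trip, with zram_conf.json standing in for the config the harness pipes over /dev/fd/62 and paths shortened:

    spdk_dd --if=magic.dump0 --ob=uring0      --json zram_conf.json   # file into the zram-backed uring bdev
    spdk_dd --ib=uring0      --of=magic.dump1 --json zram_conf.json   # read it back out to a second file
    diff -q magic.dump0 magic.dump1                                   # the two dumps must be identical
    spdk_dd --ib=uring0      --ob=malloc0     --json zram_conf.json   # then bdev-to-bdev into malloc0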
00:06:07.278 [2024-11-15 12:41:15.696466] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61007 ] 00:06:07.278 { 00:06:07.278 "subsystems": [ 00:06:07.278 { 00:06:07.278 "subsystem": "bdev", 00:06:07.278 "config": [ 00:06:07.278 { 00:06:07.278 "params": { 00:06:07.278 "block_size": 512, 00:06:07.278 "num_blocks": 1048576, 00:06:07.278 "name": "malloc0" 00:06:07.278 }, 00:06:07.278 "method": "bdev_malloc_create" 00:06:07.278 }, 00:06:07.278 { 00:06:07.278 "params": { 00:06:07.278 "filename": "/dev/zram1", 00:06:07.278 "name": "uring0" 00:06:07.278 }, 00:06:07.278 "method": "bdev_uring_create" 00:06:07.278 }, 00:06:07.278 { 00:06:07.278 "method": "bdev_wait_for_examine" 00:06:07.278 } 00:06:07.278 ] 00:06:07.278 } 00:06:07.278 ] 00:06:07.278 } 00:06:07.278 [2024-11-15 12:41:15.833122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.278 [2024-11-15 12:41:15.860457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.278 [2024-11-15 12:41:15.887591] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:08.655  [2024-11-15T12:41:18.261Z] Copying: 184/512 [MB] (184 MBps) [2024-11-15T12:41:18.829Z] Copying: 365/512 [MB] (181 MBps) [2024-11-15T12:41:19.087Z] Copying: 512/512 [MB] (average 182 MBps) 00:06:10.417 00:06:10.417 12:41:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:06:10.417 12:41:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:06:10.417 12:41:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:10.417 12:41:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:10.417 12:41:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:06:10.417 12:41:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:06:10.417 12:41:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:10.417 12:41:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:10.417 [2024-11-15 12:41:19.053875] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
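Aside (not part of the captured output): the passes starting here exercise bdev_uring_delete: uring0 is deleted via the JSON config, and the final spdk_dd read from uring0 is wrapped in NOT, meaning it is expected to fail. The "No such device" errors and the non-zero exit status further down are therefore the intended outcome rather than a test failure. A sketch of an equivalent expected-failure check, with delete_conf.json standing in for the config on /dev/fd/61 and /dev/null replacing the harness's output descriptor:

    if spdk_dd --ib=uring0 --of=/dev/null --json delete_conf.json; then
        echo "spdk_dd unexpectedly succeeded against a deleted bdev" >&2
        exit 1
    fi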
00:06:10.417 [2024-11-15 12:41:19.053967] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61057 ] 00:06:10.417 { 00:06:10.417 "subsystems": [ 00:06:10.417 { 00:06:10.418 "subsystem": "bdev", 00:06:10.418 "config": [ 00:06:10.418 { 00:06:10.418 "params": { 00:06:10.418 "block_size": 512, 00:06:10.418 "num_blocks": 1048576, 00:06:10.418 "name": "malloc0" 00:06:10.418 }, 00:06:10.418 "method": "bdev_malloc_create" 00:06:10.418 }, 00:06:10.418 { 00:06:10.418 "params": { 00:06:10.418 "filename": "/dev/zram1", 00:06:10.418 "name": "uring0" 00:06:10.418 }, 00:06:10.418 "method": "bdev_uring_create" 00:06:10.418 }, 00:06:10.418 { 00:06:10.418 "params": { 00:06:10.418 "name": "uring0" 00:06:10.418 }, 00:06:10.418 "method": "bdev_uring_delete" 00:06:10.418 }, 00:06:10.418 { 00:06:10.418 "method": "bdev_wait_for_examine" 00:06:10.418 } 00:06:10.418 ] 00:06:10.418 } 00:06:10.418 ] 00:06:10.418 } 00:06:10.675 [2024-11-15 12:41:19.196794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.675 [2024-11-15 12:41:19.225911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.675 [2024-11-15 12:41:19.256810] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.934  [2024-11-15T12:41:19.863Z] Copying: 0/0 [B] (average 0 Bps) 00:06:11.193 00:06:11.193 12:41:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:11.193 12:41:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:06:11.193 12:41:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:11.193 12:41:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:11.193 12:41:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:06:11.193 12:41:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:06:11.193 12:41:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:11.193 12:41:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:11.193 12:41:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.193 12:41:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:11.193 12:41:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.193 12:41:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:11.193 12:41:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.193 12:41:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:11.193 12:41:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:11.193 12:41:19 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:11.193 [2024-11-15 12:41:19.677968] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:06:11.193 [2024-11-15 12:41:19.678307] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61083 ] 00:06:11.193 { 00:06:11.193 "subsystems": [ 00:06:11.193 { 00:06:11.193 "subsystem": "bdev", 00:06:11.193 "config": [ 00:06:11.193 { 00:06:11.193 "params": { 00:06:11.193 "block_size": 512, 00:06:11.193 "num_blocks": 1048576, 00:06:11.193 "name": "malloc0" 00:06:11.193 }, 00:06:11.193 "method": "bdev_malloc_create" 00:06:11.193 }, 00:06:11.193 { 00:06:11.193 "params": { 00:06:11.193 "filename": "/dev/zram1", 00:06:11.193 "name": "uring0" 00:06:11.193 }, 00:06:11.193 "method": "bdev_uring_create" 00:06:11.193 }, 00:06:11.193 { 00:06:11.193 "params": { 00:06:11.193 "name": "uring0" 00:06:11.193 }, 00:06:11.193 "method": "bdev_uring_delete" 00:06:11.193 }, 00:06:11.193 { 00:06:11.193 "method": "bdev_wait_for_examine" 00:06:11.193 } 00:06:11.193 ] 00:06:11.193 } 00:06:11.193 ] 00:06:11.193 } 00:06:11.193 [2024-11-15 12:41:19.820985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.193 [2024-11-15 12:41:19.848028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.451 [2024-11-15 12:41:19.876064] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.451 [2024-11-15 12:41:19.993282] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:06:11.451 [2024-11-15 12:41:19.993381] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:06:11.451 [2024-11-15 12:41:19.993392] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:06:11.451 [2024-11-15 12:41:19.993402] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:11.710 [2024-11-15 12:41:20.154949] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:11.710 12:41:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:06:11.710 12:41:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:11.710 12:41:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:06:11.710 12:41:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:06:11.710 12:41:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:06:11.710 12:41:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:11.710 12:41:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:06:11.710 12:41:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:06:11.710 12:41:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:06:11.710 12:41:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:06:11.710 12:41:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:06:11.710 12:41:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:11.969 00:06:11.969 real 0m11.626s 00:06:11.969 user 0m7.776s 00:06:11.969 sys 0m10.124s 00:06:11.969 ************************************ 00:06:11.969 END TEST dd_uring_copy 00:06:11.969 ************************************ 00:06:11.969 12:41:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.969 12:41:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:11.969 ************************************ 00:06:11.969 END TEST spdk_dd_uring 00:06:11.969 ************************************ 00:06:11.969 00:06:11.969 real 0m11.871s 00:06:11.969 user 0m7.912s 00:06:11.969 sys 0m10.232s 00:06:11.969 12:41:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.969 12:41:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:11.969 12:41:20 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:11.969 12:41:20 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.969 12:41:20 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.969 12:41:20 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:11.969 ************************************ 00:06:11.969 START TEST spdk_dd_sparse 00:06:11.969 ************************************ 00:06:11.969 12:41:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:12.228 * Looking for test storage... 00:06:12.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:12.228 12:41:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.228 12:41:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.228 12:41:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:12.228 12:41:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:12.228 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.228 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.228 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.228 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.228 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:12.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.229 --rc genhtml_branch_coverage=1 00:06:12.229 --rc genhtml_function_coverage=1 00:06:12.229 --rc genhtml_legend=1 00:06:12.229 --rc geninfo_all_blocks=1 00:06:12.229 --rc geninfo_unexecuted_blocks=1 00:06:12.229 00:06:12.229 ' 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:12.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.229 --rc genhtml_branch_coverage=1 00:06:12.229 --rc genhtml_function_coverage=1 00:06:12.229 --rc genhtml_legend=1 00:06:12.229 --rc geninfo_all_blocks=1 00:06:12.229 --rc geninfo_unexecuted_blocks=1 00:06:12.229 00:06:12.229 ' 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:12.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.229 --rc genhtml_branch_coverage=1 00:06:12.229 --rc genhtml_function_coverage=1 00:06:12.229 --rc genhtml_legend=1 00:06:12.229 --rc geninfo_all_blocks=1 00:06:12.229 --rc geninfo_unexecuted_blocks=1 00:06:12.229 00:06:12.229 ' 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:12.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.229 --rc genhtml_branch_coverage=1 00:06:12.229 --rc genhtml_function_coverage=1 00:06:12.229 --rc genhtml_legend=1 00:06:12.229 --rc geninfo_all_blocks=1 00:06:12.229 --rc geninfo_unexecuted_blocks=1 00:06:12.229 00:06:12.229 ' 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.229 12:41:20 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:06:12.229 1+0 records in 00:06:12.229 1+0 records out 00:06:12.229 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00571319 s, 734 MB/s 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:06:12.229 1+0 records in 00:06:12.229 1+0 records out 00:06:12.229 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00608817 s, 689 MB/s 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:06:12.229 1+0 records in 00:06:12.229 1+0 records out 00:06:12.229 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00629047 s, 667 MB/s 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:12.229 ************************************ 00:06:12.229 START TEST dd_sparse_file_to_file 00:06:12.229 ************************************ 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:12.229 12:41:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:12.229 [2024-11-15 12:41:20.853928] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
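The prepare step traced just above builds the sparse input: it truncates a 100 MiB backing file for the AIO bdev and then writes three 4 MiB data extents into file_zero1 at offsets 0, 16 MiB and 32 MiB, leaving holes between them. A minimal standalone sketch of those same commands (names as in the test, everything else assumed):

  truncate dd_sparse_aio_disk --size 104857600               # 100 MiB backing file for the aio bdev
  dd if=/dev/zero of=file_zero1 bs=4M count=1                # data extent at offset 0
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4         # data extent at 16 MiB
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8         # data extent at 32 MiB

This layout is why the stat checks later in the run expect an apparent size of 37748736 bytes (36 MiB) but only 24576 allocated 512-byte blocks (12 MiB of real data).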
00:06:12.229 [2024-11-15 12:41:20.854173] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61178 ] 00:06:12.229 { 00:06:12.229 "subsystems": [ 00:06:12.229 { 00:06:12.229 "subsystem": "bdev", 00:06:12.229 "config": [ 00:06:12.229 { 00:06:12.229 "params": { 00:06:12.229 "block_size": 4096, 00:06:12.229 "filename": "dd_sparse_aio_disk", 00:06:12.229 "name": "dd_aio" 00:06:12.229 }, 00:06:12.229 "method": "bdev_aio_create" 00:06:12.229 }, 00:06:12.229 { 00:06:12.229 "params": { 00:06:12.229 "lvs_name": "dd_lvstore", 00:06:12.229 "bdev_name": "dd_aio" 00:06:12.229 }, 00:06:12.229 "method": "bdev_lvol_create_lvstore" 00:06:12.229 }, 00:06:12.229 { 00:06:12.229 "method": "bdev_wait_for_examine" 00:06:12.229 } 00:06:12.229 ] 00:06:12.229 } 00:06:12.229 ] 00:06:12.229 } 00:06:12.488 [2024-11-15 12:41:20.997514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.488 [2024-11-15 12:41:21.026237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.488 [2024-11-15 12:41:21.058912] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.488  [2024-11-15T12:41:21.417Z] Copying: 12/36 [MB] (average 1000 MBps) 00:06:12.747 00:06:12.747 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:06:12.747 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:06:12.747 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:06:12.747 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:06:12.747 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:12.747 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:06:12.747 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:06:12.747 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:06:12.747 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:06:12.747 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:12.747 00:06:12.747 real 0m0.513s 00:06:12.747 user 0m0.308s 00:06:12.747 sys 0m0.244s 00:06:12.747 ************************************ 00:06:12.747 END TEST dd_sparse_file_to_file 00:06:12.747 ************************************ 00:06:12.747 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.747 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:12.747 12:41:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:06:12.747 12:41:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.747 12:41:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.747 12:41:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:12.747 ************************************ 00:06:12.748 START TEST dd_sparse_file_to_bdev 
00:06:12.748 ************************************ 00:06:12.748 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:06:12.748 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:12.748 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:06:12.748 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:06:12.748 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:06:12.748 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:06:12.748 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:06:12.748 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:12.748 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:12.748 [2024-11-15 12:41:21.413560] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:06:12.748 [2024-11-15 12:41:21.413718] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61226 ] 00:06:12.748 { 00:06:12.748 "subsystems": [ 00:06:12.748 { 00:06:12.748 "subsystem": "bdev", 00:06:12.748 "config": [ 00:06:12.748 { 00:06:12.748 "params": { 00:06:12.748 "block_size": 4096, 00:06:12.748 "filename": "dd_sparse_aio_disk", 00:06:12.748 "name": "dd_aio" 00:06:12.748 }, 00:06:12.748 "method": "bdev_aio_create" 00:06:12.748 }, 00:06:12.748 { 00:06:12.748 "params": { 00:06:12.748 "lvs_name": "dd_lvstore", 00:06:12.748 "lvol_name": "dd_lvol", 00:06:12.748 "size_in_mib": 36, 00:06:12.748 "thin_provision": true 00:06:12.748 }, 00:06:12.748 "method": "bdev_lvol_create" 00:06:12.748 }, 00:06:12.748 { 00:06:12.748 "method": "bdev_wait_for_examine" 00:06:12.748 } 00:06:12.748 ] 00:06:12.748 } 00:06:12.748 ] 00:06:12.748 } 00:06:13.007 [2024-11-15 12:41:21.556284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.007 [2024-11-15 12:41:21.585472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.007 [2024-11-15 12:41:21.615405] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.266  [2024-11-15T12:41:21.936Z] Copying: 12/36 [MB] (average 480 MBps) 00:06:13.266 00:06:13.266 00:06:13.266 real 0m0.463s 00:06:13.266 user 0m0.283s 00:06:13.266 sys 0m0.226s 00:06:13.266 ************************************ 00:06:13.266 END TEST dd_sparse_file_to_bdev 00:06:13.266 ************************************ 00:06:13.266 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.266 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:13.266 12:41:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:06:13.266 12:41:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.266 12:41:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.266 12:41:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:13.266 ************************************ 00:06:13.266 START TEST dd_sparse_bdev_to_file 00:06:13.266 ************************************ 00:06:13.266 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:06:13.266 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:06:13.266 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:06:13.266 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:13.266 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:06:13.266 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:06:13.266 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:06:13.266 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:13.266 12:41:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:13.267 [2024-11-15 12:41:21.928247] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:06:13.267 [2024-11-15 12:41:21.928492] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61253 ] 00:06:13.267 { 00:06:13.267 "subsystems": [ 00:06:13.267 { 00:06:13.267 "subsystem": "bdev", 00:06:13.267 "config": [ 00:06:13.267 { 00:06:13.267 "params": { 00:06:13.267 "block_size": 4096, 00:06:13.267 "filename": "dd_sparse_aio_disk", 00:06:13.267 "name": "dd_aio" 00:06:13.267 }, 00:06:13.267 "method": "bdev_aio_create" 00:06:13.267 }, 00:06:13.267 { 00:06:13.267 "method": "bdev_wait_for_examine" 00:06:13.267 } 00:06:13.267 ] 00:06:13.267 } 00:06:13.267 ] 00:06:13.267 } 00:06:13.526 [2024-11-15 12:41:22.073215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.526 [2024-11-15 12:41:22.107773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.526 [2024-11-15 12:41:22.144526] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.785  [2024-11-15T12:41:22.455Z] Copying: 12/36 [MB] (average 1090 MBps) 00:06:13.785 00:06:13.785 12:41:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:06:13.785 12:41:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:06:13.785 12:41:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:06:13.785 12:41:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:06:13.785 12:41:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:13.785 12:41:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:06:13.785 12:41:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:06:13.785 12:41:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:06:13.785 12:41:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:06:13.785 12:41:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:13.785 00:06:13.785 real 0m0.502s 00:06:13.785 user 0m0.314s 00:06:13.785 sys 0m0.238s 00:06:13.785 12:41:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.785 ************************************ 00:06:13.785 END TEST dd_sparse_bdev_to_file 00:06:13.785 ************************************ 00:06:13.785 12:41:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:13.785 12:41:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:06:13.785 12:41:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:06:13.785 12:41:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:06:13.785 12:41:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:06:13.785 12:41:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:06:13.785 ************************************ 00:06:13.785 END TEST spdk_dd_sparse 00:06:13.785 ************************************ 00:06:13.785 00:06:13.785 real 0m1.861s 00:06:13.785 user 0m1.056s 00:06:13.785 sys 0m0.930s 00:06:13.785 12:41:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.785 12:41:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:14.045 12:41:22 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:14.045 12:41:22 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.045 12:41:22 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.045 12:41:22 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:14.045 ************************************ 00:06:14.045 START TEST spdk_dd_negative 00:06:14.045 ************************************ 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:14.045 * Looking for test storage... 
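The spdk_dd_sparse suite that just finished round-trips that sparse file through a thin-provisioned logical volume in three stages, each run with --sparse and a JSON config that sets up an aio bdev on dd_sparse_aio_disk (and, where needed, the dd_lvstore/dd_lvol volume on top of it). Stripped of the harness plumbing, the three copies look roughly like this, where config.json stands in for the JSON shown in the trace (passed as /dev/fd/62 in the real run):

  spdk_dd --if=file_zero1 --of=file_zero2          --bs=12582912 --sparse --json config.json
  spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol  --bs=12582912 --sparse --json config.json
  spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3  --bs=12582912 --sparse --json config.json

The file-to-file and bdev-to-file stages then compare stat --printf=%s and --printf=%b of source and destination; both ends report 37748736 bytes apparent and 24576 blocks allocated, so hole skipping survives the full round trip.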
00:06:14.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:14.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.045 --rc genhtml_branch_coverage=1 00:06:14.045 --rc genhtml_function_coverage=1 00:06:14.045 --rc genhtml_legend=1 00:06:14.045 --rc geninfo_all_blocks=1 00:06:14.045 --rc geninfo_unexecuted_blocks=1 00:06:14.045 00:06:14.045 ' 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:14.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.045 --rc genhtml_branch_coverage=1 00:06:14.045 --rc genhtml_function_coverage=1 00:06:14.045 --rc genhtml_legend=1 00:06:14.045 --rc geninfo_all_blocks=1 00:06:14.045 --rc geninfo_unexecuted_blocks=1 00:06:14.045 00:06:14.045 ' 00:06:14.045 12:41:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:14.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.045 --rc genhtml_branch_coverage=1 00:06:14.045 --rc genhtml_function_coverage=1 00:06:14.046 --rc genhtml_legend=1 00:06:14.046 --rc geninfo_all_blocks=1 00:06:14.046 --rc geninfo_unexecuted_blocks=1 00:06:14.046 00:06:14.046 ' 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:14.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.046 --rc genhtml_branch_coverage=1 00:06:14.046 --rc genhtml_function_coverage=1 00:06:14.046 --rc genhtml_legend=1 00:06:14.046 --rc geninfo_all_blocks=1 00:06:14.046 --rc geninfo_unexecuted_blocks=1 00:06:14.046 00:06:14.046 ' 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:14.046 ************************************ 00:06:14.046 START TEST 
dd_invalid_arguments 00:06:14.046 ************************************ 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:14.046 12:41:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:14.306 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:06:14.306 00:06:14.306 CPU options: 00:06:14.306 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:06:14.306 (like [0,1,10]) 00:06:14.306 --lcores lcore to CPU mapping list. The list is in the format: 00:06:14.306 [<,lcores[@CPUs]>...] 00:06:14.306 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:14.306 Within the group, '-' is used for range separator, 00:06:14.306 ',' is used for single number separator. 00:06:14.306 '( )' can be omitted for single element group, 00:06:14.306 '@' can be omitted if cpus and lcores have the same value 00:06:14.306 --disable-cpumask-locks Disable CPU core lock files. 00:06:14.306 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:06:14.306 pollers in the app support interrupt mode) 00:06:14.306 -p, --main-core main (primary) core for DPDK 00:06:14.306 00:06:14.306 Configuration options: 00:06:14.306 -c, --config, --json JSON config file 00:06:14.306 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:14.306 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:06:14.306 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:14.306 --rpcs-allowed comma-separated list of permitted RPCS 00:06:14.306 --json-ignore-init-errors don't exit on invalid config entry 00:06:14.306 00:06:14.306 Memory options: 00:06:14.306 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:14.306 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:14.306 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:14.306 -R, --huge-unlink unlink huge files after initialization 00:06:14.306 -n, --mem-channels number of memory channels used for DPDK 00:06:14.306 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:14.306 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:14.306 --no-huge run without using hugepages 00:06:14.306 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:06:14.306 -i, --shm-id shared memory ID (optional) 00:06:14.306 -g, --single-file-segments force creating just one hugetlbfs file 00:06:14.306 00:06:14.306 PCI options: 00:06:14.306 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:14.306 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:14.306 -u, --no-pci disable PCI access 00:06:14.306 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:14.306 00:06:14.306 Log options: 00:06:14.306 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:06:14.306 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:06:14.306 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:06:14.306 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:06:14.306 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:06:14.306 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:06:14.306 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:06:14.306 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:06:14.306 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:06:14.306 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:06:14.306 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:06:14.306 --silence-noticelog disable notice level logging to stderr 00:06:14.306 00:06:14.306 Trace options: 00:06:14.306 --num-trace-entries number of trace entries for each core, must be power of 2, 00:06:14.306 setting 0 to disable trace (default 32768) 00:06:14.306 Tracepoints vary in size and can use more than one trace entry. 00:06:14.306 -e, --tpoint-group [:] 00:06:14.306 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:06:14.306 [2024-11-15 12:41:22.742994] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:06:14.306 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:06:14.306 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:06:14.306 bdev_raid, scheduler, all). 00:06:14.306 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:06:14.306 a tracepoint group. First tpoint inside a group can be enabled by 00:06:14.306 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:06:14.306 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:06:14.306 in /include/spdk_internal/trace_defs.h 00:06:14.306 00:06:14.306 Other options: 00:06:14.306 -h, --help show this usage 00:06:14.306 -v, --version print SPDK version 00:06:14.306 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:14.306 --env-context Opaque context for use of the env implementation 00:06:14.306 00:06:14.306 Application specific: 00:06:14.306 [--------- DD Options ---------] 00:06:14.306 --if Input file. Must specify either --if or --ib. 00:06:14.306 --ib Input bdev. Must specifier either --if or --ib 00:06:14.306 --of Output file. Must specify either --of or --ob. 00:06:14.306 --ob Output bdev. Must specify either --of or --ob. 00:06:14.306 --iflag Input file flags. 00:06:14.306 --oflag Output file flags. 00:06:14.306 --bs I/O unit size (default: 4096) 00:06:14.306 --qd Queue depth (default: 2) 00:06:14.306 --count I/O unit count. The number of I/O units to copy. (default: all) 00:06:14.306 --skip Skip this many I/O units at start of input. (default: 0) 00:06:14.306 --seek Skip this many I/O units at start of output. (default: 0) 00:06:14.306 --aio Force usage of AIO. (by default io_uring is used if available) 00:06:14.306 --sparse Enable hole skipping in input target 00:06:14.306 Available iflag and oflag values: 00:06:14.306 append - append mode 00:06:14.306 direct - use direct I/O for data 00:06:14.306 directory - fail unless a directory 00:06:14.306 dsync - use synchronized I/O for data 00:06:14.306 noatime - do not update access time 00:06:14.306 noctty - do not assign controlling terminal from file 00:06:14.306 nofollow - do not follow symlinks 00:06:14.306 nonblock - use non-blocking I/O 00:06:14.306 sync - use synchronized I/O for data and metadata 00:06:14.306 12:41:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:06:14.306 12:41:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:14.306 12:41:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:14.306 12:41:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:14.306 00:06:14.306 real 0m0.066s 00:06:14.306 user 0m0.042s 00:06:14.306 sys 0m0.021s 00:06:14.306 12:41:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.306 12:41:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:06:14.306 ************************************ 00:06:14.306 END TEST dd_invalid_arguments 00:06:14.306 ************************************ 00:06:14.306 12:41:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:06:14.306 12:41:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.306 12:41:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.306 12:41:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:14.306 ************************************ 00:06:14.306 START TEST dd_double_input 00:06:14.307 ************************************ 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:14.307 [2024-11-15 12:41:22.866655] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
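The error above is the expected outcome: dd_double_input passes both --if and a (dummy) --ib to spdk_dd, and the NOT wrapper lets the test pass only if the command fails. In isolation the case boils down to something like the following sketch (full paths as in the trace; the exit status 22 captured as es=22 in the next records is an observation from this run, not a documented contract):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=
  # expected to fail with: "You may specify either --if or --ib, but not both."

The same NOT-plus-spdk_dd pattern is reused by the remaining negative cases below (double output, missing input, missing output, bad --bs, bad --count, misplaced --oflag/--iflag).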
00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:14.307 ************************************ 00:06:14.307 END TEST dd_double_input 00:06:14.307 ************************************ 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:14.307 00:06:14.307 real 0m0.074s 00:06:14.307 user 0m0.045s 00:06:14.307 sys 0m0.028s 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:14.307 ************************************ 00:06:14.307 START TEST dd_double_output 00:06:14.307 ************************************ 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:14.307 12:41:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:14.566 [2024-11-15 12:41:22.991590] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:14.566 ************************************ 00:06:14.566 END TEST dd_double_output 00:06:14.566 ************************************ 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:14.566 00:06:14.566 real 0m0.064s 00:06:14.566 user 0m0.040s 00:06:14.566 sys 0m0.023s 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:14.566 ************************************ 00:06:14.566 START TEST dd_no_input 00:06:14.566 ************************************ 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:14.566 [2024-11-15 12:41:23.112219] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:14.566 00:06:14.566 real 0m0.072s 00:06:14.566 user 0m0.050s 00:06:14.566 sys 0m0.021s 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:06:14.566 ************************************ 00:06:14.566 END TEST dd_no_input 00:06:14.566 ************************************ 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:14.566 ************************************ 00:06:14.566 START TEST dd_no_output 00:06:14.566 ************************************ 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:06:14.566 12:41:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:14.567 12:41:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.567 12:41:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.567 12:41:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.567 12:41:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.567 12:41:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.567 12:41:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.567 12:41:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.567 12:41:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:14.567 12:41:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:14.825 [2024-11-15 12:41:23.236954] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:06:14.825 12:41:23 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:06:14.825 12:41:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:14.825 12:41:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:14.825 ************************************ 00:06:14.825 END TEST dd_no_output 00:06:14.825 ************************************ 00:06:14.825 12:41:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:14.825 00:06:14.825 real 0m0.073s 00:06:14.825 user 0m0.048s 00:06:14.825 sys 0m0.022s 00:06:14.825 12:41:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.825 12:41:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:06:14.825 12:41:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:06:14.825 12:41:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.825 12:41:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:14.826 ************************************ 00:06:14.826 START TEST dd_wrong_blocksize 00:06:14.826 ************************************ 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:14.826 [2024-11-15 12:41:23.365618] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:14.826 00:06:14.826 real 0m0.076s 00:06:14.826 user 0m0.048s 00:06:14.826 sys 0m0.027s 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:06:14.826 ************************************ 00:06:14.826 END TEST dd_wrong_blocksize 00:06:14.826 ************************************ 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:14.826 ************************************ 00:06:14.826 START TEST dd_smaller_blocksize 00:06:14.826 ************************************ 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.826 
12:41:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:14.826 12:41:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:15.085 [2024-11-15 12:41:23.500437] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:06:15.085 [2024-11-15 12:41:23.500530] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61485 ] 00:06:15.085 [2024-11-15 12:41:23.652729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.085 [2024-11-15 12:41:23.694471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.085 [2024-11-15 12:41:23.731019] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.344 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:06:15.624 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:06:15.624 [2024-11-15 12:41:24.193978] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:06:15.624 [2024-11-15 12:41:24.194024] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:15.624 [2024-11-15 12:41:24.260773] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:06:15.930 ************************************ 00:06:15.930 END TEST dd_smaller_blocksize 00:06:15.930 ************************************ 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:15.930 00:06:15.930 real 0m0.883s 00:06:15.930 user 0m0.337s 00:06:15.930 sys 0m0.439s 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:15.930 ************************************ 00:06:15.930 START TEST dd_invalid_count 00:06:15.930 ************************************ 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
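The dd_smaller_blocksize case that closed a few records above exercises an oversized block size rather than a malformed flag: it asks spdk_dd to copy dd.dump0 to dd.dump1 with --bs=99999999999999, the EAL cannot find a suitable memseg list, and spdk_dd aborts with "Cannot allocate memory - try smaller block size value", which the wrapper records as es=244 and eventually reduces to es=1. A rough standalone reproduction (paths as in the trace; behaviour outside the harness is assumed to be the same):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
      --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
      --bs=99999999999999
  # expected to fail with: "Cannot allocate memory - try smaller block size value"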
00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:15.930 [2024-11-15 12:41:24.414714] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:15.930 ************************************ 00:06:15.930 END TEST dd_invalid_count 00:06:15.930 ************************************ 00:06:15.930 00:06:15.930 real 0m0.057s 00:06:15.930 user 0m0.033s 00:06:15.930 sys 0m0.023s 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:15.930 ************************************ 
00:06:15.930 START TEST dd_invalid_oflag 00:06:15.930 ************************************ 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:15.930 [2024-11-15 12:41:24.538349] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:15.930 ************************************ 00:06:15.930 END TEST dd_invalid_oflag 00:06:15.930 ************************************ 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:15.930 00:06:15.930 real 0m0.079s 00:06:15.930 user 0m0.050s 00:06:15.930 sys 0m0.027s 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.930 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:06:16.195 12:41:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:06:16.195 12:41:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.195 12:41:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.195 12:41:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:16.195 ************************************ 00:06:16.195 START TEST dd_invalid_iflag 00:06:16.195 
************************************ 00:06:16.195 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:06:16.195 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:16.195 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:06:16.195 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:16.195 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.195 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.195 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.195 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.195 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.195 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.195 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.195 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:16.195 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:16.195 [2024-11-15 12:41:24.662193] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:06:16.195 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:06:16.196 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:16.196 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:16.196 ************************************ 00:06:16.196 END TEST dd_invalid_iflag 00:06:16.196 ************************************ 00:06:16.196 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:16.196 00:06:16.196 real 0m0.062s 00:06:16.196 user 0m0.045s 00:06:16.196 sys 0m0.017s 00:06:16.196 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.196 12:41:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:06:16.196 12:41:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:06:16.196 12:41:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.196 12:41:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.196 12:41:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:16.196 ************************************ 00:06:16.196 START TEST dd_unknown_flag 00:06:16.196 ************************************ 00:06:16.196 
12:41:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:06:16.196 12:41:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:16.196 12:41:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:06:16.196 12:41:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:16.196 12:41:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.196 12:41:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.196 12:41:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.196 12:41:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.196 12:41:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.196 12:41:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.196 12:41:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.196 12:41:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:16.196 12:41:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:16.196 [2024-11-15 12:41:24.775286] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
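The flag cases around this point come down to argument validation inside spdk_dd itself: --oflag/--iflag are only meaningful alongside --of/--if, and a flag value the tool does not recognize (here -1) is refused before any I/O is set up. Stand-alone equivalents of those invocations, where SPDK_DD and DUMP are just shorthands for the paths seen in this run:

```bash
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP=/home/vagrant/spdk_repo/spdk/test/dd

"$SPDK_DD" --ib= --ob= --oflag=0    # expected: --oflags may be used only with --of
"$SPDK_DD" --ib= --ob= --iflag=0    # expected: --iflags may be used only with --if
"$SPDK_DD" --if="$DUMP/dd.dump0" --of="$DUMP/dd.dump1" --oflag=-1   # expected: Unknown file flag: -1
```

Each command exits non-zero, which is exactly what the surrounding NOT wrappers assert.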
00:06:16.196 [2024-11-15 12:41:24.775365] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61577 ] 00:06:16.453 [2024-11-15 12:41:24.913527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.453 [2024-11-15 12:41:24.945593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.453 [2024-11-15 12:41:24.977675] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.453 [2024-11-15 12:41:24.995969] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:06:16.453 [2024-11-15 12:41:24.996049] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:16.453 [2024-11-15 12:41:24.996102] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:06:16.453 [2024-11-15 12:41:24.996114] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:16.453 [2024-11-15 12:41:24.996358] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:06:16.453 [2024-11-15 12:41:24.996390] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:16.453 [2024-11-15 12:41:24.996450] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:06:16.453 [2024-11-15 12:41:24.996459] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:06:16.453 [2024-11-15 12:41:25.055616] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:16.453 12:41:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:06:16.453 12:41:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:16.453 12:41:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:06:16.453 12:41:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:06:16.453 12:41:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:06:16.453 12:41:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:16.453 00:06:16.453 real 0m0.382s 00:06:16.453 user 0m0.195s 00:06:16.453 sys 0m0.098s 00:06:16.453 ************************************ 00:06:16.453 END TEST dd_unknown_flag 00:06:16.453 ************************************ 00:06:16.454 12:41:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.454 12:41:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:06:16.713 12:41:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:06:16.713 12:41:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.713 12:41:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.713 12:41:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:16.713 ************************************ 00:06:16.713 START TEST dd_invalid_json 00:06:16.713 ************************************ 00:06:16.713 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:06:16.713 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:16.713 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:06:16.713 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:16.713 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:06:16.713 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.713 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.713 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.713 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.713 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.713 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.713 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.713 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:16.713 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:16.713 [2024-11-15 12:41:25.219760] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
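The dd_invalid_json case feeds spdk_dd a JSON config that is empty: the traced ':' behind the --json /dev/fd/62 argument suggests process substitution around a no-op. A rough equivalent, assuming the same binary and dump paths:

```bash
# Empty JSON config: spdk_dd should refuse it before attempting the copy.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
    --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
    --json <(:)
# expected failure: "JSON data cannot be empty"
```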
00:06:16.713 [2024-11-15 12:41:25.219862] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61600 ] 00:06:16.713 [2024-11-15 12:41:25.367271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.972 [2024-11-15 12:41:25.399387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.972 [2024-11-15 12:41:25.399478] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:06:16.972 [2024-11-15 12:41:25.399496] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:16.972 [2024-11-15 12:41:25.399504] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:16.972 [2024-11-15 12:41:25.399539] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:16.972 00:06:16.972 real 0m0.289s 00:06:16.972 user 0m0.135s 00:06:16.972 sys 0m0.052s 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.972 ************************************ 00:06:16.972 END TEST dd_invalid_json 00:06:16.972 ************************************ 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:16.972 ************************************ 00:06:16.972 START TEST dd_invalid_seek 00:06:16.972 ************************************ 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:16.972 
12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:16.972 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:16.972 [2024-11-15 12:41:25.566922] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
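Here gen_conf supplies the two 512-block, 512-byte malloc bdevs whose JSON is echoed a little further down, and the NOT'd command asks for --seek=513 into a target that only has 512 blocks. A self-contained sketch of the same case, with a stand-in for the repo's gen_conf helper:

```bash
# Stand-in for the test's gen_conf: the malloc0/malloc1 bdev config shown in the log.
gen_conf() {
  cat <<'JSON'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "block_size": 512, "num_blocks": 512, "name": "malloc0" },
    "method": "bdev_malloc_create" },
  { "params": { "block_size": 512, "num_blocks": 512, "name": "malloc1" },
    "method": "bdev_malloc_create" },
  { "method": "bdev_wait_for_examine" } ] } ] }
JSON
}

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --ib=malloc0 --ob=malloc1 --seek=513 --json <(gen_conf) --bs=512
# expected failure: "--seek value too big (513) - only 512 blocks available in output"
```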
00:06:16.973 [2024-11-15 12:41:25.567019] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61635 ] 00:06:16.973 { 00:06:16.973 "subsystems": [ 00:06:16.973 { 00:06:16.973 "subsystem": "bdev", 00:06:16.973 "config": [ 00:06:16.973 { 00:06:16.973 "params": { 00:06:16.973 "block_size": 512, 00:06:16.973 "num_blocks": 512, 00:06:16.973 "name": "malloc0" 00:06:16.973 }, 00:06:16.973 "method": "bdev_malloc_create" 00:06:16.973 }, 00:06:16.973 { 00:06:16.973 "params": { 00:06:16.973 "block_size": 512, 00:06:16.973 "num_blocks": 512, 00:06:16.973 "name": "malloc1" 00:06:16.973 }, 00:06:16.973 "method": "bdev_malloc_create" 00:06:16.973 }, 00:06:16.973 { 00:06:16.973 "method": "bdev_wait_for_examine" 00:06:16.973 } 00:06:16.973 ] 00:06:16.973 } 00:06:16.973 ] 00:06:16.973 } 00:06:17.231 [2024-11-15 12:41:25.709311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.231 [2024-11-15 12:41:25.739974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.231 [2024-11-15 12:41:25.771951] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.231 [2024-11-15 12:41:25.815766] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:06:17.231 [2024-11-15 12:41:25.815855] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.231 [2024-11-15 12:41:25.873131] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:17.490 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:06:17.490 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:17.490 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:06:17.490 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:06:17.490 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:06:17.490 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:17.490 00:06:17.490 real 0m0.419s 00:06:17.490 user 0m0.278s 00:06:17.490 sys 0m0.106s 00:06:17.490 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.490 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:06:17.490 ************************************ 00:06:17.490 END TEST dd_invalid_seek 00:06:17.490 ************************************ 00:06:17.490 12:41:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:06:17.490 12:41:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.490 12:41:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.490 12:41:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:17.490 ************************************ 00:06:17.490 START TEST dd_invalid_skip 00:06:17.490 ************************************ 00:06:17.490 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:06:17.490 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:17.490 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:17.490 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:06:17.490 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:17.490 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:17.490 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:06:17.490 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:17.490 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:06:17.490 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:06:17.490 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:17.490 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.490 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:06:17.490 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:06:17.490 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.490 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.491 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.491 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.491 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.491 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.491 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:17.491 12:41:25 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:17.491 [2024-11-15 12:41:26.042061] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:06:17.491 [2024-11-15 12:41:26.042155] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61663 ] 00:06:17.491 { 00:06:17.491 "subsystems": [ 00:06:17.491 { 00:06:17.491 "subsystem": "bdev", 00:06:17.491 "config": [ 00:06:17.491 { 00:06:17.491 "params": { 00:06:17.491 "block_size": 512, 00:06:17.491 "num_blocks": 512, 00:06:17.491 "name": "malloc0" 00:06:17.491 }, 00:06:17.491 "method": "bdev_malloc_create" 00:06:17.491 }, 00:06:17.491 { 00:06:17.491 "params": { 00:06:17.491 "block_size": 512, 00:06:17.491 "num_blocks": 512, 00:06:17.491 "name": "malloc1" 00:06:17.491 }, 00:06:17.491 "method": "bdev_malloc_create" 00:06:17.491 }, 00:06:17.491 { 00:06:17.491 "method": "bdev_wait_for_examine" 00:06:17.491 } 00:06:17.491 ] 00:06:17.491 } 00:06:17.491 ] 00:06:17.491 } 00:06:17.749 [2024-11-15 12:41:26.193369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.749 [2024-11-15 12:41:26.233764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.749 [2024-11-15 12:41:26.269836] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.749 [2024-11-15 12:41:26.318880] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:06:17.749 [2024-11-15 12:41:26.318942] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.749 [2024-11-15 12:41:26.395467] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:18.008 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:06:18.008 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:18.008 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:06:18.008 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:06:18.008 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:06:18.008 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:18.008 00:06:18.008 real 0m0.481s 00:06:18.008 user 0m0.320s 00:06:18.008 sys 0m0.121s 00:06:18.008 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.008 ************************************ 00:06:18.008 END TEST dd_invalid_skip 00:06:18.008 ************************************ 00:06:18.008 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:06:18.008 12:41:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:06:18.008 12:41:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.008 12:41:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.008 12:41:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:18.008 ************************************ 00:06:18.008 START TEST dd_invalid_input_count 00:06:18.008 ************************************ 00:06:18.008 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:06:18.008 12:41:26 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:18.008 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:18.008 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:06:18.008 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:18.008 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:18.008 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:06:18.008 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:18.008 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:06:18.008 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:18.008 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:06:18.008 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.008 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:06:18.009 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:06:18.009 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.009 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.009 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.009 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.009 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.009 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.009 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:18.009 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:18.009 [2024-11-15 12:41:26.576037] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:06:18.009 [2024-11-15 12:41:26.576127] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61702 ] 00:06:18.009 { 00:06:18.009 "subsystems": [ 00:06:18.009 { 00:06:18.009 "subsystem": "bdev", 00:06:18.009 "config": [ 00:06:18.009 { 00:06:18.009 "params": { 00:06:18.009 "block_size": 512, 00:06:18.009 "num_blocks": 512, 00:06:18.009 "name": "malloc0" 00:06:18.009 }, 00:06:18.009 "method": "bdev_malloc_create" 00:06:18.009 }, 00:06:18.009 { 00:06:18.009 "params": { 00:06:18.009 "block_size": 512, 00:06:18.009 "num_blocks": 512, 00:06:18.009 "name": "malloc1" 00:06:18.009 }, 00:06:18.009 "method": "bdev_malloc_create" 00:06:18.009 }, 00:06:18.009 { 00:06:18.009 "method": "bdev_wait_for_examine" 00:06:18.009 } 00:06:18.009 ] 00:06:18.009 } 00:06:18.009 ] 00:06:18.009 } 00:06:18.268 [2024-11-15 12:41:26.721547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.268 [2024-11-15 12:41:26.752338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.268 [2024-11-15 12:41:26.784405] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.268 [2024-11-15 12:41:26.828279] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:06:18.268 [2024-11-15 12:41:26.828335] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:18.268 [2024-11-15 12:41:26.884554] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:18.268 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:06:18.268 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:18.268 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:06:18.268 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:06:18.268 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:06:18.268 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:18.268 00:06:18.268 real 0m0.418s 00:06:18.268 user 0m0.270s 00:06:18.268 sys 0m0.114s 00:06:18.268 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.268 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:06:18.268 ************************************ 00:06:18.268 END TEST dd_invalid_input_count 00:06:18.268 ************************************ 00:06:18.527 12:41:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:06:18.527 12:41:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.527 12:41:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.527 12:41:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:18.527 ************************************ 00:06:18.527 START TEST dd_invalid_output_count 00:06:18.527 ************************************ 00:06:18.527 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # 
invalid_output_count 00:06:18.527 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:18.527 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:18.527 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:06:18.527 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:18.527 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:06:18.527 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:06:18.527 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:06:18.527 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:18.527 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:06:18.528 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.528 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.528 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.528 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.528 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.528 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.528 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.528 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:18.528 12:41:26 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:18.528 { 00:06:18.528 "subsystems": [ 00:06:18.528 { 00:06:18.528 "subsystem": "bdev", 00:06:18.528 "config": [ 00:06:18.528 { 00:06:18.528 "params": { 00:06:18.528 "block_size": 512, 00:06:18.528 "num_blocks": 512, 00:06:18.528 "name": "malloc0" 00:06:18.528 }, 00:06:18.528 "method": "bdev_malloc_create" 00:06:18.528 }, 00:06:18.528 { 00:06:18.528 "method": "bdev_wait_for_examine" 00:06:18.528 } 00:06:18.528 ] 00:06:18.528 } 00:06:18.528 ] 00:06:18.528 } 00:06:18.528 [2024-11-15 12:41:27.048215] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 
initialization... 00:06:18.528 [2024-11-15 12:41:27.048299] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61730 ] 00:06:18.528 [2024-11-15 12:41:27.193441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.786 [2024-11-15 12:41:27.221169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.787 [2024-11-15 12:41:27.247472] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.787 [2024-11-15 12:41:27.280947] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:06:18.787 [2024-11-15 12:41:27.281020] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:18.787 [2024-11-15 12:41:27.336833] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:18.787 12:41:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:06:18.787 12:41:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:18.787 12:41:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:06:18.787 12:41:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:06:18.787 12:41:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:06:18.787 12:41:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:18.787 00:06:18.787 real 0m0.416s 00:06:18.787 user 0m0.279s 00:06:18.787 sys 0m0.095s 00:06:18.787 12:41:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.787 12:41:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:06:18.787 ************************************ 00:06:18.787 END TEST dd_invalid_output_count 00:06:18.787 ************************************ 00:06:18.787 12:41:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:06:18.787 12:41:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.787 12:41:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.787 12:41:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:18.787 ************************************ 00:06:18.787 START TEST dd_bs_not_multiple 00:06:18.787 ************************************ 00:06:18.787 12:41:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:06:18.787 12:41:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:18.787 12:41:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:18.787 12:41:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:06:18.787 12:41:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:18.787 12:41:27 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:18.787 12:41:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:06:19.046 12:41:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:19.046 12:41:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:06:19.046 12:41:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:19.046 12:41:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.046 12:41:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:06:19.046 12:41:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:06:19.046 12:41:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:06:19.046 12:41:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.046 12:41:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.046 12:41:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.046 12:41:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.046 12:41:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.046 12:41:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.046 12:41:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:19.046 12:41:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:19.046 [2024-11-15 12:41:27.502868] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
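dd_bs_not_multiple reuses the same malloc bdev pair; their native block size is 512 bytes, so a --bs of 513 cannot be a whole multiple of it and has to be rejected. A short sketch, with gen_conf as in the earlier seek example:

```bash
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --ib=malloc0 --ob=malloc1 --bs=513 --json <(gen_conf)
# expected failure: "--bs value must be a multiple of input native block size (512)"
```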
00:06:19.046 [2024-11-15 12:41:27.502968] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61767 ] 00:06:19.046 { 00:06:19.046 "subsystems": [ 00:06:19.046 { 00:06:19.046 "subsystem": "bdev", 00:06:19.046 "config": [ 00:06:19.046 { 00:06:19.046 "params": { 00:06:19.046 "block_size": 512, 00:06:19.046 "num_blocks": 512, 00:06:19.046 "name": "malloc0" 00:06:19.046 }, 00:06:19.046 "method": "bdev_malloc_create" 00:06:19.046 }, 00:06:19.046 { 00:06:19.046 "params": { 00:06:19.046 "block_size": 512, 00:06:19.046 "num_blocks": 512, 00:06:19.046 "name": "malloc1" 00:06:19.046 }, 00:06:19.046 "method": "bdev_malloc_create" 00:06:19.046 }, 00:06:19.046 { 00:06:19.046 "method": "bdev_wait_for_examine" 00:06:19.046 } 00:06:19.046 ] 00:06:19.046 } 00:06:19.046 ] 00:06:19.046 } 00:06:19.046 [2024-11-15 12:41:27.641890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.046 [2024-11-15 12:41:27.669918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.046 [2024-11-15 12:41:27.696614] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.306 [2024-11-15 12:41:27.739074] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:06:19.306 [2024-11-15 12:41:27.739144] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.306 [2024-11-15 12:41:27.794879] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:19.306 12:41:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:06:19.306 12:41:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:19.306 12:41:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:06:19.306 12:41:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:06:19.306 12:41:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:06:19.306 12:41:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:19.306 00:06:19.306 real 0m0.389s 00:06:19.306 user 0m0.264s 00:06:19.306 sys 0m0.085s 00:06:19.306 12:41:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.306 12:41:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:06:19.306 ************************************ 00:06:19.306 END TEST dd_bs_not_multiple 00:06:19.306 ************************************ 00:06:19.306 ************************************ 00:06:19.306 END TEST spdk_dd_negative 00:06:19.306 ************************************ 00:06:19.306 00:06:19.306 real 0m5.403s 00:06:19.306 user 0m2.872s 00:06:19.306 sys 0m1.924s 00:06:19.306 12:41:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.306 12:41:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:19.306 00:06:19.306 real 1m1.403s 00:06:19.306 user 0m38.588s 00:06:19.306 sys 0m25.699s 00:06:19.306 12:41:27 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.306 12:41:27 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:19.306 
************************************ 00:06:19.306 END TEST spdk_dd 00:06:19.306 ************************************ 00:06:19.306 12:41:27 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:19.306 12:41:27 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:19.306 12:41:27 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:19.306 12:41:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:19.306 12:41:27 -- common/autotest_common.sh@10 -- # set +x 00:06:19.566 12:41:28 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:19.566 12:41:28 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:19.566 12:41:28 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:19.566 12:41:28 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:19.566 12:41:28 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:19.566 12:41:28 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:19.566 12:41:28 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:19.566 12:41:28 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:19.566 12:41:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.566 12:41:28 -- common/autotest_common.sh@10 -- # set +x 00:06:19.566 ************************************ 00:06:19.566 START TEST nvmf_tcp 00:06:19.566 ************************************ 00:06:19.566 12:41:28 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:19.566 * Looking for test storage... 00:06:19.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:19.566 12:41:28 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:19.566 12:41:28 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:19.566 12:41:28 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:19.566 12:41:28 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:19.566 12:41:28 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.566 12:41:28 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.566 12:41:28 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.566 12:41:28 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.566 12:41:28 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.566 12:41:28 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.566 12:41:28 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.566 12:41:28 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.566 12:41:28 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.566 12:41:28 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.566 12:41:28 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.566 12:41:28 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:19.566 12:41:28 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:19.566 12:41:28 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.566 12:41:28 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.566 12:41:28 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:19.566 12:41:28 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:19.566 12:41:28 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.566 12:41:28 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:19.566 12:41:28 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.566 12:41:28 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:19.566 12:41:28 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:19.566 12:41:28 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.566 12:41:28 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:19.566 12:41:28 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.566 12:41:28 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.566 12:41:28 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.566 12:41:28 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:19.566 12:41:28 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.566 12:41:28 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:19.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.566 --rc genhtml_branch_coverage=1 00:06:19.566 --rc genhtml_function_coverage=1 00:06:19.566 --rc genhtml_legend=1 00:06:19.566 --rc geninfo_all_blocks=1 00:06:19.566 --rc geninfo_unexecuted_blocks=1 00:06:19.566 00:06:19.566 ' 00:06:19.566 12:41:28 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:19.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.566 --rc genhtml_branch_coverage=1 00:06:19.566 --rc genhtml_function_coverage=1 00:06:19.566 --rc genhtml_legend=1 00:06:19.566 --rc geninfo_all_blocks=1 00:06:19.566 --rc geninfo_unexecuted_blocks=1 00:06:19.566 00:06:19.566 ' 00:06:19.566 12:41:28 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:19.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.566 --rc genhtml_branch_coverage=1 00:06:19.566 --rc genhtml_function_coverage=1 00:06:19.566 --rc genhtml_legend=1 00:06:19.566 --rc geninfo_all_blocks=1 00:06:19.566 --rc geninfo_unexecuted_blocks=1 00:06:19.566 00:06:19.566 ' 00:06:19.566 12:41:28 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:19.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.566 --rc genhtml_branch_coverage=1 00:06:19.566 --rc genhtml_function_coverage=1 00:06:19.566 --rc genhtml_legend=1 00:06:19.566 --rc geninfo_all_blocks=1 00:06:19.566 --rc geninfo_unexecuted_blocks=1 00:06:19.566 00:06:19.566 ' 00:06:19.566 12:41:28 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:19.566 12:41:28 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:19.566 12:41:28 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:19.566 12:41:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:19.566 12:41:28 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.566 12:41:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:19.566 ************************************ 00:06:19.566 START TEST nvmf_target_core 00:06:19.566 ************************************ 00:06:19.566 12:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:19.826 * Looking for test storage... 00:06:19.826 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:19.826 12:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:19.826 12:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:06:19.826 12:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:19.826 12:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:19.826 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.826 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.826 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.826 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.826 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.826 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.826 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.826 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.826 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.826 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.826 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.826 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:19.826 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:19.826 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.826 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.826 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:19.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.827 --rc genhtml_branch_coverage=1 00:06:19.827 --rc genhtml_function_coverage=1 00:06:19.827 --rc genhtml_legend=1 00:06:19.827 --rc geninfo_all_blocks=1 00:06:19.827 --rc geninfo_unexecuted_blocks=1 00:06:19.827 00:06:19.827 ' 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:19.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.827 --rc genhtml_branch_coverage=1 00:06:19.827 --rc genhtml_function_coverage=1 00:06:19.827 --rc genhtml_legend=1 00:06:19.827 --rc geninfo_all_blocks=1 00:06:19.827 --rc geninfo_unexecuted_blocks=1 00:06:19.827 00:06:19.827 ' 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:19.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.827 --rc genhtml_branch_coverage=1 00:06:19.827 --rc genhtml_function_coverage=1 00:06:19.827 --rc genhtml_legend=1 00:06:19.827 --rc geninfo_all_blocks=1 00:06:19.827 --rc geninfo_unexecuted_blocks=1 00:06:19.827 00:06:19.827 ' 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:19.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.827 --rc genhtml_branch_coverage=1 00:06:19.827 --rc genhtml_function_coverage=1 00:06:19.827 --rc genhtml_legend=1 00:06:19.827 --rc geninfo_all_blocks=1 00:06:19.827 --rc geninfo_unexecuted_blocks=1 00:06:19.827 00:06:19.827 ' 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:19.827 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:19.827 ************************************ 00:06:19.827 START TEST nvmf_host_management 00:06:19.827 ************************************ 00:06:19.827 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:20.088 * Looking for test storage... 
00:06:20.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:20.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.088 --rc genhtml_branch_coverage=1 00:06:20.088 --rc genhtml_function_coverage=1 00:06:20.088 --rc genhtml_legend=1 00:06:20.088 --rc geninfo_all_blocks=1 00:06:20.088 --rc geninfo_unexecuted_blocks=1 00:06:20.088 00:06:20.088 ' 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:20.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.088 --rc genhtml_branch_coverage=1 00:06:20.088 --rc genhtml_function_coverage=1 00:06:20.088 --rc genhtml_legend=1 00:06:20.088 --rc geninfo_all_blocks=1 00:06:20.088 --rc geninfo_unexecuted_blocks=1 00:06:20.088 00:06:20.088 ' 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:20.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.088 --rc genhtml_branch_coverage=1 00:06:20.088 --rc genhtml_function_coverage=1 00:06:20.088 --rc genhtml_legend=1 00:06:20.088 --rc geninfo_all_blocks=1 00:06:20.088 --rc geninfo_unexecuted_blocks=1 00:06:20.088 00:06:20.088 ' 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:20.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.088 --rc genhtml_branch_coverage=1 00:06:20.088 --rc genhtml_function_coverage=1 00:06:20.088 --rc genhtml_legend=1 00:06:20.088 --rc geninfo_all_blocks=1 00:06:20.088 --rc geninfo_unexecuted_blocks=1 00:06:20.088 00:06:20.088 ' 00:06:20.088 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:20.089 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:20.089 12:41:28 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:20.089 Cannot find device "nvmf_init_br" 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:20.089 Cannot find device "nvmf_init_br2" 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:20.089 Cannot find device "nvmf_tgt_br" 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:20.089 Cannot find device "nvmf_tgt_br2" 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:20.089 Cannot find device "nvmf_init_br" 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:20.089 Cannot find device "nvmf_init_br2" 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:06:20.089 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:20.348 Cannot find device "nvmf_tgt_br" 00:06:20.348 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:20.349 Cannot find device "nvmf_tgt_br2" 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:20.349 Cannot find device "nvmf_br" 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:20.349 Cannot find device "nvmf_init_if" 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:20.349 Cannot find device "nvmf_init_if2" 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:20.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:20.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:20.349 12:41:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:06:20.608 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:20.608 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:20.608 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:20.608 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:20.608 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:20.608 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:20.608 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:20.608 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:20.608 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:20.608 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:20.608 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:20.608 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:20.608 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:20.608 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:06:20.608 00:06:20.608 --- 10.0.0.3 ping statistics --- 00:06:20.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:20.608 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:06:20.608 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:20.608 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:20.608 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms 00:06:20.608 00:06:20.608 --- 10.0.0.4 ping statistics --- 00:06:20.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:20.608 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:06:20.609 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:20.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:20.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:06:20.609 00:06:20.609 --- 10.0.0.1 ping statistics --- 00:06:20.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:20.609 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:06:20.609 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:20.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:20.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:06:20.609 00:06:20.609 --- 10.0.0.2 ping statistics --- 00:06:20.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:20.609 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:06:20.609 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:20.609 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:06:20.609 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:20.609 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:20.609 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:20.609 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:20.609 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:20.609 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:20.609 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:20.609 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:20.609 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:20.609 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:20.609 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:20.609 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:20.609 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:20.609 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62108 00:06:20.609 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62108 00:06:20.609 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62108 ']' 00:06:20.609 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.609 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:20.609 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.609 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.609 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.609 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:20.609 [2024-11-15 12:41:29.265237] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:06:20.609 [2024-11-15 12:41:29.265515] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:20.868 [2024-11-15 12:41:29.411804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:20.868 [2024-11-15 12:41:29.443631] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:20.868 [2024-11-15 12:41:29.443884] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:20.868 [2024-11-15 12:41:29.443967] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:20.868 [2024-11-15 12:41:29.444081] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:20.868 [2024-11-15 12:41:29.444138] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:20.868 [2024-11-15 12:41:29.444864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.868 [2024-11-15 12:41:29.445787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.868 [2024-11-15 12:41:29.445892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:20.868 [2024-11-15 12:41:29.445908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.868 [2024-11-15 12:41:29.485538] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:21.127 [2024-11-15 12:41:29.584673] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:21.127 Malloc0 00:06:21.127 [2024-11-15 12:41:29.652263] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:21.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62160 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62160 /var/tmp/bdevperf.sock 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62160 ']' 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:21.127 { 00:06:21.127 "params": { 00:06:21.127 "name": "Nvme$subsystem", 00:06:21.127 "trtype": "$TEST_TRANSPORT", 00:06:21.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:21.127 "adrfam": "ipv4", 00:06:21.127 "trsvcid": "$NVMF_PORT", 00:06:21.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:21.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:21.127 "hdgst": ${hdgst:-false}, 00:06:21.127 "ddgst": ${ddgst:-false} 00:06:21.127 }, 00:06:21.127 "method": "bdev_nvme_attach_controller" 00:06:21.127 } 00:06:21.127 EOF 00:06:21.127 )") 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:21.127 12:41:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:21.127 "params": { 00:06:21.127 "name": "Nvme0", 00:06:21.127 "trtype": "tcp", 00:06:21.127 "traddr": "10.0.0.3", 00:06:21.127 "adrfam": "ipv4", 00:06:21.127 "trsvcid": "4420", 00:06:21.127 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:21.127 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:21.127 "hdgst": false, 00:06:21.127 "ddgst": false 00:06:21.127 }, 00:06:21.127 "method": "bdev_nvme_attach_controller" 00:06:21.127 }' 00:06:21.127 [2024-11-15 12:41:29.761950] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:06:21.127 [2024-11-15 12:41:29.762033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62160 ] 00:06:21.386 [2024-11-15 12:41:29.916268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.386 [2024-11-15 12:41:29.954637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.386 [2024-11-15 12:41:29.996302] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.644 Running I/O for 10 seconds... 
00:06:21.644 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.645 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:21.645 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:21.645 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.645 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:21.645 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.645 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:21.645 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:21.645 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:21.645 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:21.645 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:21.645 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:21.645 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:21.645 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:21.645 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:21.645 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.645 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:21.645 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:21.645 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.645 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:06:21.645 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:06:21.645 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:21.905 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:21.905 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:21.905 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:21.905 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.905 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:21.905 12:41:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:21.905 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.905 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:06:21.905 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:06:21.905 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:21.905 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:21.905 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:21.905 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:21.905 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.905 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:21.905 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.905 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:21.905 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.905 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:21.905 [2024-11-15 12:41:30.569567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.905 [2024-11-15 12:41:30.569624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.905 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.905 [2024-11-15 12:41:30.569649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.905 [2024-11-15 12:41:30.569661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.905 [2024-11-15 12:41:30.569673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.905 [2024-11-15 12:41:30.569683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.905 [2024-11-15 12:41:30.569694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.905 [2024-11-15 12:41:30.569704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.905 [2024-11-15 12:41:30.569715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.905 [2024-11-15 12:41:30.569725] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.905 [2024-11-15 12:41:30.569736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.905 [2024-11-15 12:41:30.569746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.905 [2024-11-15 12:41:30.569757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.905 [2024-11-15 12:41:30.569767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.905 [2024-11-15 12:41:30.569778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.905 [2024-11-15 12:41:30.569787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.905 [2024-11-15 12:41:30.569798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.905 [2024-11-15 12:41:30.569809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.905 [2024-11-15 12:41:30.569820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.905 [2024-11-15 12:41:30.569829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.905 [2024-11-15 12:41:30.569841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.905 12:41:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:21.905 [2024-11-15 12:41:30.569850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.905 [2024-11-15 12:41:30.569861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.905 [2024-11-15 12:41:30.569871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.905 [2024-11-15 12:41:30.569890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.905 [2024-11-15 12:41:30.569899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.569941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.569950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.569961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91904 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.569974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.569985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.569994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:21.906 [2024-11-15 12:41:30.570663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.906 [2024-11-15 12:41:30.570812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.906 [2024-11-15 12:41:30.570821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.907 [2024-11-15 12:41:30.570832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.907 [2024-11-15 12:41:30.570841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.907 [2024-11-15 12:41:30.570852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.907 [2024-11-15 12:41:30.570861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.907 [2024-11-15 12:41:30.570872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.907 
[2024-11-15 12:41:30.570882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.907 [2024-11-15 12:41:30.570893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.907 [2024-11-15 12:41:30.570903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.907 [2024-11-15 12:41:30.570914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.907 [2024-11-15 12:41:30.570923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.907 [2024-11-15 12:41:30.570934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.907 [2024-11-15 12:41:30.570943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.907 [2024-11-15 12:41:30.570955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.907 [2024-11-15 12:41:30.570964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.907 [2024-11-15 12:41:30.570975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.907 [2024-11-15 12:41:30.570984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.907 [2024-11-15 12:41:30.570995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.907 [2024-11-15 12:41:30.571005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.907 [2024-11-15 12:41:30.571016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.907 [2024-11-15 12:41:30.571025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.907 [2024-11-15 12:41:30.571036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.907 [2024-11-15 12:41:30.571047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.907 [2024-11-15 12:41:30.571059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.907 [2024-11-15 12:41:30.571068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.907 [2024-11-15 12:41:30.571080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19752d0 is same with the state(6) to be set 00:06:21.907 [2024-11-15 12:41:30.571246] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:21.907 [2024-11-15 12:41:30.571266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.907 [2024-11-15 12:41:30.571277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:21.907 [2024-11-15 12:41:30.571286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.907 [2024-11-15 12:41:30.571296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:21.907 [2024-11-15 12:41:30.571305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.907 [2024-11-15 12:41:30.571315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:21.907 [2024-11-15 12:41:30.571324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.907 [2024-11-15 12:41:30.571333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197ace0 is same with the state(6) to be set 00:06:22.168 [2024-11-15 12:41:30.572508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:22.168 task offset: 90112 on job bdev=Nvme0n1 fails 00:06:22.168 00:06:22.168 Latency(us) 00:06:22.168 [2024-11-15T12:41:30.838Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:22.168 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:22.168 Job: Nvme0n1 ended in about 0.46 seconds with error 00:06:22.168 Verification LBA range: start 0x0 length 0x400 00:06:22.168 Nvme0n1 : 0.46 1514.53 94.66 137.68 0.00 37268.60 2249.08 43134.60 00:06:22.168 [2024-11-15T12:41:30.838Z] =================================================================================================================== 00:06:22.168 [2024-11-15T12:41:30.838Z] Total : 1514.53 94.66 137.68 0.00 37268.60 2249.08 43134.60 00:06:22.168 [2024-11-15 12:41:30.574607] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:22.168 [2024-11-15 12:41:30.574663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x197ace0 (9): Bad file descriptor 00:06:22.168 [2024-11-15 12:41:30.584511] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
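The episode above is the scripted failure path of host_management.sh: the waitforio helper polls until bdevperf has completed at least 100 reads on Nvme0n1 (67 ops on the first query, 579 on the second), after which host0 is removed from and re-added to cnode0 while 64 writes are still queued, which produces the ABORTED - SQ DELETION notices and the controller reset logged above. A rough reconstruction of the polling loop, pieced together from the xtrace, follows; it calls scripts/rpc.py directly where the test suite uses its rpc_cmd wrapper, so treat it as a sketch rather than the exact script:

    # Poll a bdevperf RPC socket until a bdev has served at least 100 reads.
    # $1 = RPC socket (here /var/tmp/bdevperf.sock), $2 = bdev name (here Nvme0n1)
    waitforio() {
        local sock=$1 bdev=$2
        local ret=1 i read_io_count
        for (( i = 10; i != 0; i-- )); do
            # bdev_get_iostat returns JSON; num_read_ops is the cumulative read count
            read_io_count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" \
                bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25
        done
        return $ret
    }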
00:06:23.105 12:41:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62160 00:06:23.105 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62160) - No such process 00:06:23.105 12:41:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:23.105 12:41:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:23.105 12:41:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:23.105 12:41:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:23.105 12:41:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:23.105 12:41:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:23.105 12:41:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:23.105 12:41:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:23.105 { 00:06:23.105 "params": { 00:06:23.105 "name": "Nvme$subsystem", 00:06:23.105 "trtype": "$TEST_TRANSPORT", 00:06:23.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:23.105 "adrfam": "ipv4", 00:06:23.105 "trsvcid": "$NVMF_PORT", 00:06:23.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:23.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:23.105 "hdgst": ${hdgst:-false}, 00:06:23.105 "ddgst": ${ddgst:-false} 00:06:23.105 }, 00:06:23.105 "method": "bdev_nvme_attach_controller" 00:06:23.105 } 00:06:23.105 EOF 00:06:23.105 )") 00:06:23.105 12:41:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:23.105 12:41:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:23.105 12:41:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:23.105 12:41:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:23.105 "params": { 00:06:23.105 "name": "Nvme0", 00:06:23.105 "trtype": "tcp", 00:06:23.105 "traddr": "10.0.0.3", 00:06:23.105 "adrfam": "ipv4", 00:06:23.105 "trsvcid": "4420", 00:06:23.105 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:23.105 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:23.105 "hdgst": false, 00:06:23.105 "ddgst": false 00:06:23.105 }, 00:06:23.105 "method": "bdev_nvme_attach_controller" 00:06:23.105 }' 00:06:23.105 [2024-11-15 12:41:31.635065] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
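The fresh bdevperf instance launched above is wired up purely through a generated JSON config: gen_nvmf_target_json (from test/nvmf/common.sh) renders the bdev_nvme_attach_controller parameters printed in the trace, and bdevperf consumes them via --json /dev/fd/62. Assuming the same repo layout and flags as this run, the invocation is equivalent to the following process-substitution sketch:

    # Hand the generated target config to bdevperf on an anonymous fd;
    # -q 64 -o 65536 -w verify -t 1 matches the run traced above.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1

Outside the test harness, any JSON file carrying the same bdev_nvme_attach_controller method can stand in for the process substitution.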
00:06:23.106 [2024-11-15 12:41:31.635793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62195 ] 00:06:23.364 [2024-11-15 12:41:31.783577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.364 [2024-11-15 12:41:31.813117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.364 [2024-11-15 12:41:31.849273] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.364 Running I/O for 1 seconds... 00:06:24.302 1664.00 IOPS, 104.00 MiB/s 00:06:24.302 Latency(us) 00:06:24.302 [2024-11-15T12:41:32.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:24.302 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:24.302 Verification LBA range: start 0x0 length 0x400 00:06:24.302 Nvme0n1 : 1.01 1711.67 106.98 0.00 0.00 36700.09 3470.43 33363.78 00:06:24.302 [2024-11-15T12:41:32.972Z] =================================================================================================================== 00:06:24.302 [2024-11-15T12:41:32.972Z] Total : 1711.67 106.98 0.00 0.00 36700.09 3470.43 33363.78 00:06:24.561 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:24.561 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:24.561 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:06:24.561 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:06:24.561 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:24.561 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:24.561 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:24.561 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:24.561 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:24.561 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:24.561 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:24.561 rmmod nvme_tcp 00:06:24.561 rmmod nvme_fabrics 00:06:24.561 rmmod nvme_keyring 00:06:24.561 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:24.561 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:24.561 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:24.561 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62108 ']' 00:06:24.561 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62108 00:06:24.562 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 62108 ']' 00:06:24.562 12:41:33 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 62108 00:06:24.562 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:24.562 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.562 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62108 00:06:24.821 killing process with pid 62108 00:06:24.821 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:24.821 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:24.821 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62108' 00:06:24.821 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 62108 00:06:24.821 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 62108 00:06:24.821 [2024-11-15 12:41:33.356791] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:24.821 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:24.821 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:24.821 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:24.821 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:24.821 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:24.821 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:24.821 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:24.821 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:24.821 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:06:24.821 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:06:24.821 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:06:24.821 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:06:24.821 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:06:24.821 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:06:24.821 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:06:24.821 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:06:24.821 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:06:24.821 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:06:25.080 12:41:33 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:06:25.080 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:06:25.080 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:25.080 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:25.080 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:06:25.080 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:25.080 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:25.081 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:25.081 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:06:25.081 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:25.081 ************************************ 00:06:25.081 END TEST nvmf_host_management 00:06:25.081 ************************************ 00:06:25.081 00:06:25.081 real 0m5.170s 00:06:25.081 user 0m18.001s 00:06:25.081 sys 0m1.375s 00:06:25.081 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.081 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:25.081 12:41:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:25.081 12:41:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:25.081 12:41:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.081 12:41:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:25.081 ************************************ 00:06:25.081 START TEST nvmf_lvol 00:06:25.081 ************************************ 00:06:25.081 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:25.081 * Looking for test storage... 
00:06:25.081 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:25.081 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:25.081 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:06:25.081 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:25.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.341 --rc genhtml_branch_coverage=1 00:06:25.341 --rc genhtml_function_coverage=1 00:06:25.341 --rc genhtml_legend=1 00:06:25.341 --rc geninfo_all_blocks=1 00:06:25.341 --rc geninfo_unexecuted_blocks=1 00:06:25.341 00:06:25.341 ' 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:25.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.341 --rc genhtml_branch_coverage=1 00:06:25.341 --rc genhtml_function_coverage=1 00:06:25.341 --rc genhtml_legend=1 00:06:25.341 --rc geninfo_all_blocks=1 00:06:25.341 --rc geninfo_unexecuted_blocks=1 00:06:25.341 00:06:25.341 ' 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:25.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.341 --rc genhtml_branch_coverage=1 00:06:25.341 --rc genhtml_function_coverage=1 00:06:25.341 --rc genhtml_legend=1 00:06:25.341 --rc geninfo_all_blocks=1 00:06:25.341 --rc geninfo_unexecuted_blocks=1 00:06:25.341 00:06:25.341 ' 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:25.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.341 --rc genhtml_branch_coverage=1 00:06:25.341 --rc genhtml_function_coverage=1 00:06:25.341 --rc genhtml_legend=1 00:06:25.341 --rc geninfo_all_blocks=1 00:06:25.341 --rc geninfo_unexecuted_blocks=1 00:06:25.341 00:06:25.341 ' 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.341 12:41:33 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.341 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:25.342 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:25.342 
12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:25.342 Cannot find device "nvmf_init_br" 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:25.342 Cannot find device "nvmf_init_br2" 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:25.342 Cannot find device "nvmf_tgt_br" 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:25.342 Cannot find device "nvmf_tgt_br2" 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:25.342 Cannot find device "nvmf_init_br" 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:25.342 Cannot find device "nvmf_init_br2" 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:25.342 Cannot find device "nvmf_tgt_br" 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:25.342 Cannot find device "nvmf_tgt_br2" 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:25.342 Cannot find device "nvmf_br" 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:25.342 Cannot find device "nvmf_init_if" 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:25.342 Cannot find device "nvmf_init_if2" 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:25.342 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:25.342 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:25.342 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:25.342 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:25.342 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:25.601 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:25.601 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:25.602 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:25.602 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.116 ms 00:06:25.602 00:06:25.602 --- 10.0.0.3 ping statistics --- 00:06:25.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:25.602 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:25.602 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:25.602 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:06:25.602 00:06:25.602 --- 10.0.0.4 ping statistics --- 00:06:25.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:25.602 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:25.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:25.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:06:25.602 00:06:25.602 --- 10.0.0.1 ping statistics --- 00:06:25.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:25.602 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:25.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:25.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:06:25.602 00:06:25.602 --- 10.0.0.2 ping statistics --- 00:06:25.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:25.602 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62465 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62465 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 62465 ']' 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.602 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:25.602 [2024-11-15 12:41:34.238797] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:06:25.602 [2024-11-15 12:41:34.238887] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:25.861 [2024-11-15 12:41:34.386275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:25.861 [2024-11-15 12:41:34.425338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:25.861 [2024-11-15 12:41:34.425412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:25.861 [2024-11-15 12:41:34.425429] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:25.861 [2024-11-15 12:41:34.425438] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:25.861 [2024-11-15 12:41:34.425447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:25.861 [2024-11-15 12:41:34.426369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.861 [2024-11-15 12:41:34.426478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.861 [2024-11-15 12:41:34.426488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.861 [2024-11-15 12:41:34.461969] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.861 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.861 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:25.861 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:25.861 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:25.861 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:26.120 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:26.120 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:26.120 [2024-11-15 12:41:34.772302] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:26.379 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:26.638 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:26.638 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:26.897 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:26.898 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:27.156 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:27.416 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e8b897fa-9f5d-4f8b-a8f8-0790168187de 00:06:27.416 12:41:35 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e8b897fa-9f5d-4f8b-a8f8-0790168187de lvol 20 00:06:27.675 12:41:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=8b6b5e06-647f-4898-8897-35ffa792e17b 00:06:27.675 12:41:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:27.675 12:41:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8b6b5e06-647f-4898-8897-35ffa792e17b 00:06:27.934 12:41:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:06:28.193 [2024-11-15 12:41:36.806036] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:28.193 12:41:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:06:28.452 12:41:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62533 00:06:28.452 12:41:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:28.452 12:41:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:29.388 12:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 8b6b5e06-647f-4898-8897-35ffa792e17b MY_SNAPSHOT 00:06:29.956 12:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=67d341e2-d1cb-454a-a3a4-30080b0f643e 00:06:29.956 12:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 8b6b5e06-647f-4898-8897-35ffa792e17b 30 00:06:30.214 12:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 67d341e2-d1cb-454a-a3a4-30080b0f643e MY_CLONE 00:06:30.473 12:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=fb2df0d4-9586-489c-b136-0caf124b707e 00:06:30.473 12:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate fb2df0d4-9586-489c-b136-0caf124b707e 00:06:31.040 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62533 00:06:39.156 Initializing NVMe Controllers 00:06:39.156 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:06:39.156 Controller IO queue size 128, less than required. 00:06:39.156 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:39.156 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:39.156 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:39.156 Initialization complete. Launching workers. 
00:06:39.156 ======================================================== 00:06:39.156 Latency(us) 00:06:39.156 Device Information : IOPS MiB/s Average min max 00:06:39.156 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12034.64 47.01 10644.03 1536.84 85374.30 00:06:39.156 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11984.55 46.81 10685.13 2650.14 43123.24 00:06:39.156 ======================================================== 00:06:39.156 Total : 24019.19 93.82 10664.54 1536.84 85374.30 00:06:39.156 00:06:39.156 12:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:39.156 12:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8b6b5e06-647f-4898-8897-35ffa792e17b 00:06:39.156 12:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e8b897fa-9f5d-4f8b-a8f8-0790168187de 00:06:39.414 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:39.414 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:39.414 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:39.414 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:39.414 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:39.674 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:39.674 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:39.674 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:39.674 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:39.674 rmmod nvme_tcp 00:06:39.674 rmmod nvme_fabrics 00:06:39.674 rmmod nvme_keyring 00:06:39.674 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:39.674 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:39.674 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:39.674 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62465 ']' 00:06:39.674 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62465 00:06:39.674 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 62465 ']' 00:06:39.674 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 62465 00:06:39.674 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:39.674 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.674 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62465 00:06:39.674 killing process with pid 62465 00:06:39.674 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.674 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.674 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62465' 00:06:39.674 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 62465 00:06:39.674 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 62465 00:06:39.674 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:39.674 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:39.674 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:39.674 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:39.674 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:39.674 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:39.674 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:39.674 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:39.674 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:06:39.674 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:06:39.934 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:06:39.934 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:06:39.934 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:06:39.934 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:06:39.934 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:06:39.934 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:06:39.934 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:06:39.934 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:06:39.934 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:06:39.934 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:06:39.934 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:39.934 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:39.934 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:06:39.934 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:39.934 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:39.934 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:39.934 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:06:39.934 00:06:39.934 real 0m14.876s 00:06:39.934 user 1m2.372s 00:06:39.934 sys 0m4.046s 00:06:39.934 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:39.934 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:39.934 ************************************ 00:06:39.934 END TEST nvmf_lvol 00:06:39.934 ************************************ 00:06:39.934 12:41:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:39.934 12:41:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:39.934 12:41:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.934 12:41:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:39.934 ************************************ 00:06:39.934 START TEST nvmf_lvs_grow 00:06:39.934 ************************************ 00:06:39.934 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:40.194 * Looking for test storage... 00:06:40.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:40.194 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:40.194 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:06:40.194 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:40.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.195 --rc genhtml_branch_coverage=1 00:06:40.195 --rc genhtml_function_coverage=1 00:06:40.195 --rc genhtml_legend=1 00:06:40.195 --rc geninfo_all_blocks=1 00:06:40.195 --rc geninfo_unexecuted_blocks=1 00:06:40.195 00:06:40.195 ' 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:40.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.195 --rc genhtml_branch_coverage=1 00:06:40.195 --rc genhtml_function_coverage=1 00:06:40.195 --rc genhtml_legend=1 00:06:40.195 --rc geninfo_all_blocks=1 00:06:40.195 --rc geninfo_unexecuted_blocks=1 00:06:40.195 00:06:40.195 ' 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:40.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.195 --rc genhtml_branch_coverage=1 00:06:40.195 --rc genhtml_function_coverage=1 00:06:40.195 --rc genhtml_legend=1 00:06:40.195 --rc geninfo_all_blocks=1 00:06:40.195 --rc geninfo_unexecuted_blocks=1 00:06:40.195 00:06:40.195 ' 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:40.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.195 --rc genhtml_branch_coverage=1 00:06:40.195 --rc genhtml_function_coverage=1 00:06:40.195 --rc genhtml_legend=1 00:06:40.195 --rc geninfo_all_blocks=1 00:06:40.195 --rc geninfo_unexecuted_blocks=1 00:06:40.195 00:06:40.195 ' 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:40.195 12:41:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:40.195 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
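The scripts/common.sh trace above (lt 1.15 2 via cmp_versions) is the lcov version gate that selects the LCOV_OPTS shown. A minimal sketch of that comparison logic, using a simplified stand-in helper rather than the full cmp_versions implementation (the real one also splits on '-' and ':' and supports more operators):

# version_lt is a simplified stand-in: split on '.', compare numerically
# field by field, first differing field decides the result
version_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i a b
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        a=${v1[i]:-0} b=${v2[i]:-0}
        ((a == b)) && continue
        ((a < b)) && return 0 || return 1
    done
    return 1    # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov < 2: use legacy LCOV_OPTS"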
00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:40.195 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:40.196 Cannot find device "nvmf_init_br" 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:40.196 Cannot find device "nvmf_init_br2" 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:40.196 Cannot find device "nvmf_tgt_br" 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:40.196 Cannot find device "nvmf_tgt_br2" 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:40.196 Cannot find device "nvmf_init_br" 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:40.196 Cannot find device "nvmf_init_br2" 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:40.196 Cannot find device "nvmf_tgt_br" 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:06:40.196 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:40.454 Cannot find device "nvmf_tgt_br2" 00:06:40.454 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:06:40.454 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:40.454 Cannot find device "nvmf_br" 00:06:40.454 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:06:40.454 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:40.454 Cannot find device "nvmf_init_if" 00:06:40.454 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:06:40.454 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:40.454 Cannot find device "nvmf_init_if2" 00:06:40.454 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:06:40.454 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:40.454 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:40.454 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:06:40.454 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:40.454 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:06:40.454 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:06:40.454 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:40.454 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:40.454 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:40.454 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:40.454 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:40.454 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:40.454 12:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:40.454 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:40.454 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:40.454 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:40.454 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:40.454 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:40.454 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:40.454 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:40.454 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:40.454 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:40.454 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:40.454 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:40.454 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:40.454 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:40.454 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:06:40.455 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:40.455 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:40.455 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:40.455 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
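The ACCEPT rules applied next go through the ipts wrapper, and nvmftestfini later removes them with iptr (both expansions are visible in this log during the lvol run). A rough reconstruction of that tag-and-restore pattern from the trace, assuming the nvmf/common.sh helpers do no more than this:

ipts() {   # insert a rule and tag it with an SPDK_NVMF comment
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
iptr() {   # drop every rule carrying the SPDK_NVMF tag, leave the rest untouched
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}
ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the target port
iptr                                                            # teardown: restore ruleset minus tagged rules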
00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:40.713 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:40.713 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:06:40.713 00:06:40.713 --- 10.0.0.3 ping statistics --- 00:06:40.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:40.713 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:40.713 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:40.713 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:06:40.713 00:06:40.713 --- 10.0.0.4 ping statistics --- 00:06:40.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:40.713 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:40.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:40.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:06:40.713 00:06:40.713 --- 10.0.0.1 ping statistics --- 00:06:40.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:40.713 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:40.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:40.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:06:40.713 00:06:40.713 --- 10.0.0.2 ping statistics --- 00:06:40.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:40.713 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=62904 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 62904 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 62904 ']' 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.713 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:40.713 [2024-11-15 12:41:49.265872] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:06:40.714 [2024-11-15 12:41:49.265965] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:40.973 [2024-11-15 12:41:49.412915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.973 [2024-11-15 12:41:49.439672] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:40.973 [2024-11-15 12:41:49.439722] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:40.973 [2024-11-15 12:41:49.439747] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:40.973 [2024-11-15 12:41:49.439764] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:40.973 [2024-11-15 12:41:49.439770] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:40.973 [2024-11-15 12:41:49.440087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.973 [2024-11-15 12:41:49.466709] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.973 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.973 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:06:40.973 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:40.973 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:40.973 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:40.973 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:40.973 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:41.232 [2024-11-15 12:41:49.865472] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:41.232 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:41.232 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.232 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.232 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:41.232 ************************************ 00:06:41.232 START TEST lvs_grow_clean 00:06:41.232 ************************************ 00:06:41.232 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:06:41.232 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:41.232 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:41.232 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:41.232 12:41:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:41.232 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:41.232 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:41.232 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:06:41.490 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:06:41.490 12:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:41.749 12:41:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:41.749 12:41:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:42.007 12:41:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0e002831-b500-476c-ba19-6d4955a9ef08 00:06:42.007 12:41:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e002831-b500-476c-ba19-6d4955a9ef08 00:06:42.007 12:41:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:42.266 12:41:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:42.266 12:41:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:42.266 12:41:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0e002831-b500-476c-ba19-6d4955a9ef08 lvol 150 00:06:42.525 12:41:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=865c52df-b218-4257-9751-83aa03e63ad4 00:06:42.525 12:41:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:06:42.525 12:41:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:42.783 [2024-11-15 12:41:51.264611] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:42.783 [2024-11-15 12:41:51.264683] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:42.783 true 00:06:42.783 12:41:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e002831-b500-476c-ba19-6d4955a9ef08 00:06:42.783 12:41:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:43.042 12:41:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:43.042 12:41:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:43.300 12:41:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 865c52df-b218-4257-9751-83aa03e63ad4 00:06:43.558 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:06:43.558 [2024-11-15 12:41:52.205072] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:43.558 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:06:44.126 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=62979 00:06:44.126 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:44.126 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:44.126 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 62979 /var/tmp/bdevperf.sock 00:06:44.126 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 62979 ']' 00:06:44.126 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:44.126 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:44.126 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:44.126 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.126 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:44.126 [2024-11-15 12:41:52.527487] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:06:44.127 [2024-11-15 12:41:52.527571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62979 ] 00:06:44.127 [2024-11-15 12:41:52.664334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.127 [2024-11-15 12:41:52.694536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.127 [2024-11-15 12:41:52.720683] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.127 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.127 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:06:44.127 12:41:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:44.694 Nvme0n1 00:06:44.694 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:44.953 [ 00:06:44.953 { 00:06:44.953 "name": "Nvme0n1", 00:06:44.953 "aliases": [ 00:06:44.953 "865c52df-b218-4257-9751-83aa03e63ad4" 00:06:44.953 ], 00:06:44.953 "product_name": "NVMe disk", 00:06:44.953 "block_size": 4096, 00:06:44.953 "num_blocks": 38912, 00:06:44.953 "uuid": "865c52df-b218-4257-9751-83aa03e63ad4", 00:06:44.953 "numa_id": -1, 00:06:44.953 "assigned_rate_limits": { 00:06:44.953 "rw_ios_per_sec": 0, 00:06:44.953 "rw_mbytes_per_sec": 0, 00:06:44.953 "r_mbytes_per_sec": 0, 00:06:44.953 "w_mbytes_per_sec": 0 00:06:44.953 }, 00:06:44.953 "claimed": false, 00:06:44.953 "zoned": false, 00:06:44.953 "supported_io_types": { 00:06:44.953 "read": true, 00:06:44.953 "write": true, 00:06:44.953 "unmap": true, 00:06:44.953 "flush": true, 00:06:44.953 "reset": true, 00:06:44.953 "nvme_admin": true, 00:06:44.953 "nvme_io": true, 00:06:44.953 "nvme_io_md": false, 00:06:44.953 "write_zeroes": true, 00:06:44.953 "zcopy": false, 00:06:44.953 "get_zone_info": false, 00:06:44.953 "zone_management": false, 00:06:44.953 "zone_append": false, 00:06:44.953 "compare": true, 00:06:44.953 "compare_and_write": true, 00:06:44.953 "abort": true, 00:06:44.953 "seek_hole": false, 00:06:44.953 "seek_data": false, 00:06:44.953 "copy": true, 00:06:44.953 "nvme_iov_md": false 00:06:44.953 }, 00:06:44.953 "memory_domains": [ 00:06:44.953 { 00:06:44.953 "dma_device_id": "system", 00:06:44.953 "dma_device_type": 1 00:06:44.953 } 00:06:44.953 ], 00:06:44.953 "driver_specific": { 00:06:44.953 "nvme": [ 00:06:44.953 { 00:06:44.953 "trid": { 00:06:44.953 "trtype": "TCP", 00:06:44.953 "adrfam": "IPv4", 00:06:44.953 "traddr": "10.0.0.3", 00:06:44.953 "trsvcid": "4420", 00:06:44.953 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:44.953 }, 00:06:44.953 "ctrlr_data": { 00:06:44.953 "cntlid": 1, 00:06:44.953 "vendor_id": "0x8086", 00:06:44.953 "model_number": "SPDK bdev Controller", 00:06:44.953 "serial_number": "SPDK0", 00:06:44.953 "firmware_revision": "25.01", 00:06:44.953 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:44.953 "oacs": { 00:06:44.953 "security": 0, 00:06:44.953 "format": 0, 00:06:44.953 "firmware": 0, 
00:06:44.953 "ns_manage": 0 00:06:44.953 }, 00:06:44.953 "multi_ctrlr": true, 00:06:44.953 "ana_reporting": false 00:06:44.953 }, 00:06:44.953 "vs": { 00:06:44.953 "nvme_version": "1.3" 00:06:44.953 }, 00:06:44.953 "ns_data": { 00:06:44.953 "id": 1, 00:06:44.953 "can_share": true 00:06:44.953 } 00:06:44.953 } 00:06:44.953 ], 00:06:44.953 "mp_policy": "active_passive" 00:06:44.953 } 00:06:44.953 } 00:06:44.953 ] 00:06:44.953 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:44.953 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=62995 00:06:44.953 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:44.953 Running I/O for 10 seconds... 00:06:45.914 Latency(us) 00:06:45.914 [2024-11-15T12:41:54.584Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:45.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:45.914 Nvme0n1 : 1.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:06:45.914 [2024-11-15T12:41:54.584Z] =================================================================================================================== 00:06:45.914 [2024-11-15T12:41:54.584Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:06:45.914 00:06:46.848 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0e002831-b500-476c-ba19-6d4955a9ef08 00:06:46.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:46.848 Nvme0n1 : 2.00 6590.50 25.74 0.00 0.00 0.00 0.00 0.00 00:06:46.848 [2024-11-15T12:41:55.518Z] =================================================================================================================== 00:06:46.848 [2024-11-15T12:41:55.518Z] Total : 6590.50 25.74 0.00 0.00 0.00 0.00 0.00 00:06:46.848 00:06:47.108 true 00:06:47.108 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e002831-b500-476c-ba19-6d4955a9ef08 00:06:47.108 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:06:47.675 12:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:47.675 12:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:47.675 12:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 62995 00:06:47.934 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:47.934 Nvme0n1 : 3.00 6637.33 25.93 0.00 0.00 0.00 0.00 0.00 00:06:47.934 [2024-11-15T12:41:56.604Z] =================================================================================================================== 00:06:47.934 [2024-11-15T12:41:56.604Z] Total : 6637.33 25.93 0.00 0.00 0.00 0.00 0.00 00:06:47.934 00:06:48.871 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:48.871 Nvme0n1 : 4.00 6600.75 25.78 0.00 0.00 0.00 0.00 0.00 00:06:48.871 [2024-11-15T12:41:57.541Z] 
=================================================================================================================== 00:06:48.871 [2024-11-15T12:41:57.541Z] Total : 6600.75 25.78 0.00 0.00 0.00 0.00 0.00 00:06:48.871 00:06:50.248 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:50.248 Nvme0n1 : 5.00 6576.00 25.69 0.00 0.00 0.00 0.00 0.00 00:06:50.248 [2024-11-15T12:41:58.918Z] =================================================================================================================== 00:06:50.248 [2024-11-15T12:41:58.918Z] Total : 6576.00 25.69 0.00 0.00 0.00 0.00 0.00 00:06:50.248 00:06:51.186 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:51.186 Nvme0n1 : 6.00 6559.50 25.62 0.00 0.00 0.00 0.00 0.00 00:06:51.186 [2024-11-15T12:41:59.856Z] =================================================================================================================== 00:06:51.186 [2024-11-15T12:41:59.856Z] Total : 6559.50 25.62 0.00 0.00 0.00 0.00 0.00 00:06:51.186 00:06:52.121 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:52.121 Nvme0n1 : 7.00 6547.71 25.58 0.00 0.00 0.00 0.00 0.00 00:06:52.121 [2024-11-15T12:42:00.791Z] =================================================================================================================== 00:06:52.121 [2024-11-15T12:42:00.791Z] Total : 6547.71 25.58 0.00 0.00 0.00 0.00 0.00 00:06:52.121 00:06:53.057 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:53.057 Nvme0n1 : 8.00 6523.00 25.48 0.00 0.00 0.00 0.00 0.00 00:06:53.057 [2024-11-15T12:42:01.727Z] =================================================================================================================== 00:06:53.057 [2024-11-15T12:42:01.727Z] Total : 6523.00 25.48 0.00 0.00 0.00 0.00 0.00 00:06:53.057 00:06:53.994 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:53.994 Nvme0n1 : 9.00 6489.67 25.35 0.00 0.00 0.00 0.00 0.00 00:06:53.994 [2024-11-15T12:42:02.664Z] =================================================================================================================== 00:06:53.994 [2024-11-15T12:42:02.664Z] Total : 6489.67 25.35 0.00 0.00 0.00 0.00 0.00 00:06:53.994 00:06:54.931 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:54.931 Nvme0n1 : 10.00 6463.00 25.25 0.00 0.00 0.00 0.00 0.00 00:06:54.931 [2024-11-15T12:42:03.601Z] =================================================================================================================== 00:06:54.931 [2024-11-15T12:42:03.601Z] Total : 6463.00 25.25 0.00 0.00 0.00 0.00 0.00 00:06:54.931 00:06:54.931 00:06:54.931 Latency(us) 00:06:54.931 [2024-11-15T12:42:03.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:54.931 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:54.931 Nvme0n1 : 10.01 6471.46 25.28 0.00 0.00 19775.10 5868.45 51475.55 00:06:54.931 [2024-11-15T12:42:03.601Z] =================================================================================================================== 00:06:54.931 [2024-11-15T12:42:03.601Z] Total : 6471.46 25.28 0.00 0.00 19775.10 5868.45 51475.55 00:06:54.931 { 00:06:54.931 "results": [ 00:06:54.931 { 00:06:54.931 "job": "Nvme0n1", 00:06:54.931 "core_mask": "0x2", 00:06:54.931 "workload": "randwrite", 00:06:54.931 "status": "finished", 00:06:54.931 "queue_depth": 128, 00:06:54.931 "io_size": 4096, 00:06:54.931 "runtime": 
10.0067, 00:06:54.931 "iops": 6471.464119040243, 00:06:54.931 "mibps": 25.279156715000948, 00:06:54.931 "io_failed": 0, 00:06:54.931 "io_timeout": 0, 00:06:54.931 "avg_latency_us": 19775.10329669342, 00:06:54.931 "min_latency_us": 5868.450909090909, 00:06:54.931 "max_latency_us": 51475.54909090909 00:06:54.931 } 00:06:54.931 ], 00:06:54.931 "core_count": 1 00:06:54.931 } 00:06:54.931 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 62979 00:06:54.931 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 62979 ']' 00:06:54.931 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 62979 00:06:54.931 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:06:54.932 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.932 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62979 00:06:54.932 killing process with pid 62979 00:06:54.932 Received shutdown signal, test time was about 10.000000 seconds 00:06:54.932 00:06:54.932 Latency(us) 00:06:54.932 [2024-11-15T12:42:03.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:54.932 [2024-11-15T12:42:03.602Z] =================================================================================================================== 00:06:54.932 [2024-11-15T12:42:03.602Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:54.932 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:54.932 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:54.932 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62979' 00:06:54.932 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 62979 00:06:54.932 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 62979 00:06:55.190 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:06:55.448 12:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:55.708 12:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e002831-b500-476c-ba19-6d4955a9ef08 00:06:55.708 12:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:06:55.966 12:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:06:55.966 12:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:06:55.966 12:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:56.226 [2024-11-15 12:42:04.672313] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:06:56.226 12:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e002831-b500-476c-ba19-6d4955a9ef08 00:06:56.226 12:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:06:56.226 12:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e002831-b500-476c-ba19-6d4955a9ef08 00:06:56.226 12:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:56.226 12:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.226 12:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:56.226 12:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.226 12:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:56.226 12:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.226 12:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:56.226 12:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:56.226 12:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e002831-b500-476c-ba19-6d4955a9ef08 00:06:56.484 request: 00:06:56.484 { 00:06:56.484 "uuid": "0e002831-b500-476c-ba19-6d4955a9ef08", 00:06:56.484 "method": "bdev_lvol_get_lvstores", 00:06:56.484 "req_id": 1 00:06:56.484 } 00:06:56.484 Got JSON-RPC error response 00:06:56.484 response: 00:06:56.484 { 00:06:56.484 "code": -19, 00:06:56.484 "message": "No such device" 00:06:56.484 } 00:06:56.484 12:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:06:56.484 12:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:56.484 12:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:56.484 12:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:56.484 12:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:56.744 aio_bdev 00:06:56.744 12:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
865c52df-b218-4257-9751-83aa03e63ad4 00:06:56.744 12:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=865c52df-b218-4257-9751-83aa03e63ad4 00:06:56.744 12:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:56.744 12:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:06:56.744 12:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:56.744 12:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:56.744 12:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:06:57.003 12:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 865c52df-b218-4257-9751-83aa03e63ad4 -t 2000 00:06:57.262 [ 00:06:57.262 { 00:06:57.262 "name": "865c52df-b218-4257-9751-83aa03e63ad4", 00:06:57.262 "aliases": [ 00:06:57.262 "lvs/lvol" 00:06:57.262 ], 00:06:57.262 "product_name": "Logical Volume", 00:06:57.262 "block_size": 4096, 00:06:57.262 "num_blocks": 38912, 00:06:57.262 "uuid": "865c52df-b218-4257-9751-83aa03e63ad4", 00:06:57.262 "assigned_rate_limits": { 00:06:57.262 "rw_ios_per_sec": 0, 00:06:57.262 "rw_mbytes_per_sec": 0, 00:06:57.262 "r_mbytes_per_sec": 0, 00:06:57.262 "w_mbytes_per_sec": 0 00:06:57.262 }, 00:06:57.262 "claimed": false, 00:06:57.262 "zoned": false, 00:06:57.262 "supported_io_types": { 00:06:57.262 "read": true, 00:06:57.262 "write": true, 00:06:57.262 "unmap": true, 00:06:57.262 "flush": false, 00:06:57.262 "reset": true, 00:06:57.262 "nvme_admin": false, 00:06:57.262 "nvme_io": false, 00:06:57.262 "nvme_io_md": false, 00:06:57.262 "write_zeroes": true, 00:06:57.262 "zcopy": false, 00:06:57.262 "get_zone_info": false, 00:06:57.262 "zone_management": false, 00:06:57.262 "zone_append": false, 00:06:57.262 "compare": false, 00:06:57.262 "compare_and_write": false, 00:06:57.262 "abort": false, 00:06:57.262 "seek_hole": true, 00:06:57.262 "seek_data": true, 00:06:57.262 "copy": false, 00:06:57.262 "nvme_iov_md": false 00:06:57.262 }, 00:06:57.262 "driver_specific": { 00:06:57.262 "lvol": { 00:06:57.262 "lvol_store_uuid": "0e002831-b500-476c-ba19-6d4955a9ef08", 00:06:57.262 "base_bdev": "aio_bdev", 00:06:57.262 "thin_provision": false, 00:06:57.262 "num_allocated_clusters": 38, 00:06:57.262 "snapshot": false, 00:06:57.262 "clone": false, 00:06:57.262 "esnap_clone": false 00:06:57.262 } 00:06:57.262 } 00:06:57.262 } 00:06:57.262 ] 00:06:57.262 12:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:06:57.262 12:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e002831-b500-476c-ba19-6d4955a9ef08 00:06:57.262 12:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:06:57.520 12:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:06:57.520 12:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:06:57.520 12:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e002831-b500-476c-ba19-6d4955a9ef08 00:06:57.780 12:42:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:06:57.780 12:42:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 865c52df-b218-4257-9751-83aa03e63ad4 00:06:57.780 12:42:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0e002831-b500-476c-ba19-6d4955a9ef08 00:06:58.348 12:42:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:58.348 12:42:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:06:58.915 ************************************ 00:06:58.915 END TEST lvs_grow_clean 00:06:58.915 ************************************ 00:06:58.915 00:06:58.915 real 0m17.476s 00:06:58.915 user 0m16.472s 00:06:58.915 sys 0m2.361s 00:06:58.915 12:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.915 12:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:58.915 12:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:06:58.915 12:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:58.915 12:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.915 12:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:58.915 ************************************ 00:06:58.915 START TEST lvs_grow_dirty 00:06:58.915 ************************************ 00:06:58.915 12:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:06:58.915 12:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:58.915 12:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:58.915 12:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:58.915 12:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:58.915 12:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:58.915 12:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:58.915 12:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:06:58.915 12:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:06:58.915 12:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:59.173 12:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:59.173 12:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:59.431 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=7bc2c139-559b-4e86-bb2f-9d44a9964be6 00:06:59.431 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:59.431 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bc2c139-559b-4e86-bb2f-9d44a9964be6 00:06:59.690 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:59.690 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:59.690 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7bc2c139-559b-4e86-bb2f-9d44a9964be6 lvol 150 00:06:59.949 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=08c589ef-86d1-4161-89f8-d66814177973 00:06:59.949 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:06:59.949 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:00.207 [2024-11-15 12:42:08.757260] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:00.207 [2024-11-15 12:42:08.757511] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:00.207 true 00:07:00.207 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bc2c139-559b-4e86-bb2f-9d44a9964be6 00:07:00.207 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:00.466 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:00.466 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:00.724 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 08c589ef-86d1-4161-89f8-d66814177973 00:07:00.984 12:42:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:01.243 [2024-11-15 12:42:09.801801] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:01.243 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:01.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:01.503 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63241 00:07:01.503 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:01.503 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:01.503 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63241 /var/tmp/bdevperf.sock 00:07:01.503 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63241 ']' 00:07:01.503 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:01.503 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.503 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:01.503 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.503 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:01.503 [2024-11-15 12:42:10.151366] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:07:01.503 [2024-11-15 12:42:10.151699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63241 ] 00:07:01.761 [2024-11-15 12:42:10.297691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.761 [2024-11-15 12:42:10.329521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.761 [2024-11-15 12:42:10.357670] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.697 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.697 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:02.697 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:02.956 Nvme0n1 00:07:02.956 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:03.215 [ 00:07:03.215 { 00:07:03.215 "name": "Nvme0n1", 00:07:03.215 "aliases": [ 00:07:03.215 "08c589ef-86d1-4161-89f8-d66814177973" 00:07:03.215 ], 00:07:03.215 "product_name": "NVMe disk", 00:07:03.215 "block_size": 4096, 00:07:03.215 "num_blocks": 38912, 00:07:03.215 "uuid": "08c589ef-86d1-4161-89f8-d66814177973", 00:07:03.215 "numa_id": -1, 00:07:03.215 "assigned_rate_limits": { 00:07:03.215 "rw_ios_per_sec": 0, 00:07:03.215 "rw_mbytes_per_sec": 0, 00:07:03.215 "r_mbytes_per_sec": 0, 00:07:03.215 "w_mbytes_per_sec": 0 00:07:03.215 }, 00:07:03.215 "claimed": false, 00:07:03.215 "zoned": false, 00:07:03.215 "supported_io_types": { 00:07:03.215 "read": true, 00:07:03.215 "write": true, 00:07:03.215 "unmap": true, 00:07:03.215 "flush": true, 00:07:03.215 "reset": true, 00:07:03.215 "nvme_admin": true, 00:07:03.215 "nvme_io": true, 00:07:03.215 "nvme_io_md": false, 00:07:03.215 "write_zeroes": true, 00:07:03.215 "zcopy": false, 00:07:03.215 "get_zone_info": false, 00:07:03.215 "zone_management": false, 00:07:03.215 "zone_append": false, 00:07:03.215 "compare": true, 00:07:03.215 "compare_and_write": true, 00:07:03.215 "abort": true, 00:07:03.215 "seek_hole": false, 00:07:03.215 "seek_data": false, 00:07:03.215 "copy": true, 00:07:03.215 "nvme_iov_md": false 00:07:03.215 }, 00:07:03.215 "memory_domains": [ 00:07:03.215 { 00:07:03.215 "dma_device_id": "system", 00:07:03.215 "dma_device_type": 1 00:07:03.215 } 00:07:03.215 ], 00:07:03.215 "driver_specific": { 00:07:03.215 "nvme": [ 00:07:03.215 { 00:07:03.215 "trid": { 00:07:03.215 "trtype": "TCP", 00:07:03.215 "adrfam": "IPv4", 00:07:03.215 "traddr": "10.0.0.3", 00:07:03.215 "trsvcid": "4420", 00:07:03.215 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:03.215 }, 00:07:03.215 "ctrlr_data": { 00:07:03.215 "cntlid": 1, 00:07:03.215 "vendor_id": "0x8086", 00:07:03.215 "model_number": "SPDK bdev Controller", 00:07:03.215 "serial_number": "SPDK0", 00:07:03.215 "firmware_revision": "25.01", 00:07:03.215 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:03.215 "oacs": { 00:07:03.215 "security": 0, 00:07:03.215 "format": 0, 00:07:03.215 "firmware": 0, 
00:07:03.215 "ns_manage": 0 00:07:03.215 }, 00:07:03.215 "multi_ctrlr": true, 00:07:03.215 "ana_reporting": false 00:07:03.215 }, 00:07:03.215 "vs": { 00:07:03.215 "nvme_version": "1.3" 00:07:03.215 }, 00:07:03.215 "ns_data": { 00:07:03.215 "id": 1, 00:07:03.215 "can_share": true 00:07:03.215 } 00:07:03.215 } 00:07:03.215 ], 00:07:03.215 "mp_policy": "active_passive" 00:07:03.215 } 00:07:03.215 } 00:07:03.215 ] 00:07:03.215 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63265 00:07:03.215 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:03.215 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:03.215 Running I/O for 10 seconds... 00:07:04.152 Latency(us) 00:07:04.152 [2024-11-15T12:42:12.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:04.152 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:04.152 Nvme0n1 : 1.00 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:07:04.152 [2024-11-15T12:42:12.822Z] =================================================================================================================== 00:07:04.152 [2024-11-15T12:42:12.822Z] Total : 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:07:04.152 00:07:05.095 12:42:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7bc2c139-559b-4e86-bb2f-9d44a9964be6 00:07:05.355 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:05.355 Nvme0n1 : 2.00 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:07:05.355 [2024-11-15T12:42:14.025Z] =================================================================================================================== 00:07:05.355 [2024-11-15T12:42:14.025Z] Total : 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:07:05.355 00:07:05.613 true 00:07:05.613 12:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:05.613 12:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bc2c139-559b-4e86-bb2f-9d44a9964be6 00:07:05.872 12:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:05.872 12:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:05.872 12:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63265 00:07:06.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:06.439 Nvme0n1 : 3.00 6519.33 25.47 0.00 0.00 0.00 0.00 0.00 00:07:06.439 [2024-11-15T12:42:15.109Z] =================================================================================================================== 00:07:06.439 [2024-11-15T12:42:15.109Z] Total : 6519.33 25.47 0.00 0.00 0.00 0.00 0.00 00:07:06.439 00:07:07.375 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:07.375 Nvme0n1 : 4.00 6540.50 25.55 0.00 0.00 0.00 0.00 0.00 00:07:07.375 [2024-11-15T12:42:16.045Z] 
=================================================================================================================== 00:07:07.375 [2024-11-15T12:42:16.045Z] Total : 6540.50 25.55 0.00 0.00 0.00 0.00 0.00 00:07:07.375 00:07:08.312 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.312 Nvme0n1 : 5.00 6553.20 25.60 0.00 0.00 0.00 0.00 0.00 00:07:08.312 [2024-11-15T12:42:16.982Z] =================================================================================================================== 00:07:08.312 [2024-11-15T12:42:16.982Z] Total : 6553.20 25.60 0.00 0.00 0.00 0.00 0.00 00:07:08.312 00:07:09.248 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:09.248 Nvme0n1 : 6.00 6519.33 25.47 0.00 0.00 0.00 0.00 0.00 00:07:09.248 [2024-11-15T12:42:17.918Z] =================================================================================================================== 00:07:09.248 [2024-11-15T12:42:17.918Z] Total : 6519.33 25.47 0.00 0.00 0.00 0.00 0.00 00:07:09.248 00:07:10.184 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.184 Nvme0n1 : 7.00 6513.29 25.44 0.00 0.00 0.00 0.00 0.00 00:07:10.184 [2024-11-15T12:42:18.854Z] =================================================================================================================== 00:07:10.184 [2024-11-15T12:42:18.854Z] Total : 6513.29 25.44 0.00 0.00 0.00 0.00 0.00 00:07:10.184 00:07:11.562 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.562 Nvme0n1 : 8.00 6492.88 25.36 0.00 0.00 0.00 0.00 0.00 00:07:11.562 [2024-11-15T12:42:20.232Z] =================================================================================================================== 00:07:11.562 [2024-11-15T12:42:20.232Z] Total : 6492.88 25.36 0.00 0.00 0.00 0.00 0.00 00:07:11.562 00:07:12.497 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.498 Nvme0n1 : 9.00 6448.78 25.19 0.00 0.00 0.00 0.00 0.00 00:07:12.498 [2024-11-15T12:42:21.168Z] =================================================================================================================== 00:07:12.498 [2024-11-15T12:42:21.168Z] Total : 6448.78 25.19 0.00 0.00 0.00 0.00 0.00 00:07:12.498 00:07:13.435 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.435 Nvme0n1 : 10.00 6438.90 25.15 0.00 0.00 0.00 0.00 0.00 00:07:13.435 [2024-11-15T12:42:22.105Z] =================================================================================================================== 00:07:13.435 [2024-11-15T12:42:22.105Z] Total : 6438.90 25.15 0.00 0.00 0.00 0.00 0.00 00:07:13.435 00:07:13.435 00:07:13.435 Latency(us) 00:07:13.435 [2024-11-15T12:42:22.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:13.435 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.435 Nvme0n1 : 10.01 6445.29 25.18 0.00 0.00 19853.87 16205.27 99614.72 00:07:13.435 [2024-11-15T12:42:22.105Z] =================================================================================================================== 00:07:13.435 [2024-11-15T12:42:22.105Z] Total : 6445.29 25.18 0.00 0.00 19853.87 16205.27 99614.72 00:07:13.435 { 00:07:13.435 "results": [ 00:07:13.435 { 00:07:13.435 "job": "Nvme0n1", 00:07:13.435 "core_mask": "0x2", 00:07:13.435 "workload": "randwrite", 00:07:13.435 "status": "finished", 00:07:13.435 "queue_depth": 128, 00:07:13.435 "io_size": 4096, 00:07:13.435 "runtime": 
10.009948, 00:07:13.435 "iops": 6445.288227271511, 00:07:13.435 "mibps": 25.176907137779338, 00:07:13.435 "io_failed": 0, 00:07:13.435 "io_timeout": 0, 00:07:13.435 "avg_latency_us": 19853.87479043578, 00:07:13.435 "min_latency_us": 16205.265454545455, 00:07:13.435 "max_latency_us": 99614.72 00:07:13.435 } 00:07:13.435 ], 00:07:13.435 "core_count": 1 00:07:13.435 } 00:07:13.435 12:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63241 00:07:13.435 12:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 63241 ']' 00:07:13.435 12:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 63241 00:07:13.435 12:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:13.435 12:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.435 12:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63241 00:07:13.435 12:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:13.435 12:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:13.435 12:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63241' 00:07:13.435 killing process with pid 63241 00:07:13.435 Received shutdown signal, test time was about 10.000000 seconds 00:07:13.435 00:07:13.435 Latency(us) 00:07:13.435 [2024-11-15T12:42:22.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:13.435 [2024-11-15T12:42:22.105Z] =================================================================================================================== 00:07:13.435 [2024-11-15T12:42:22.105Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:13.435 12:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 63241 00:07:13.435 12:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 63241 00:07:13.435 12:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:13.694 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:13.954 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bc2c139-559b-4e86-bb2f-9d44a9964be6 00:07:13.954 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:14.212 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:14.213 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:14.213 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 62904 00:07:14.213 
12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 62904 00:07:14.213 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 62904 Killed "${NVMF_APP[@]}" "$@" 00:07:14.213 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:14.213 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:14.213 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:14.213 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:14.213 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:14.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.213 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63403 00:07:14.213 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:14.213 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63403 00:07:14.213 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63403 ']' 00:07:14.213 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.213 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.213 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.213 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.213 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:14.213 [2024-11-15 12:42:22.844980] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:07:14.213 [2024-11-15 12:42:22.845234] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.471 [2024-11-15 12:42:22.985262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.471 [2024-11-15 12:42:23.011178] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:14.471 [2024-11-15 12:42:23.011449] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:14.471 [2024-11-15 12:42:23.011681] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:14.471 [2024-11-15 12:42:23.011950] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:14.471 [2024-11-15 12:42:23.011979] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:14.471 [2024-11-15 12:42:23.012284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.471 [2024-11-15 12:42:23.038492] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:15.464 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.464 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:15.464 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:15.464 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:15.464 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:15.464 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:15.464 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:15.464 [2024-11-15 12:42:24.093565] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:15.464 [2024-11-15 12:42:24.094081] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:15.464 [2024-11-15 12:42:24.094355] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:15.722 12:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:15.722 12:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 08c589ef-86d1-4161-89f8-d66814177973 00:07:15.722 12:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=08c589ef-86d1-4161-89f8-d66814177973 00:07:15.722 12:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:15.722 12:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:15.722 12:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:15.722 12:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:15.722 12:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:15.981 12:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 08c589ef-86d1-4161-89f8-d66814177973 -t 2000 00:07:16.240 [ 00:07:16.240 { 00:07:16.240 "name": "08c589ef-86d1-4161-89f8-d66814177973", 00:07:16.240 "aliases": [ 00:07:16.240 "lvs/lvol" 00:07:16.240 ], 00:07:16.240 "product_name": "Logical Volume", 00:07:16.240 "block_size": 4096, 00:07:16.240 "num_blocks": 38912, 00:07:16.240 "uuid": "08c589ef-86d1-4161-89f8-d66814177973", 00:07:16.240 "assigned_rate_limits": { 00:07:16.240 "rw_ios_per_sec": 0, 00:07:16.240 "rw_mbytes_per_sec": 0, 00:07:16.240 "r_mbytes_per_sec": 0, 00:07:16.240 "w_mbytes_per_sec": 0 00:07:16.240 }, 00:07:16.240 
"claimed": false, 00:07:16.240 "zoned": false, 00:07:16.240 "supported_io_types": { 00:07:16.240 "read": true, 00:07:16.240 "write": true, 00:07:16.240 "unmap": true, 00:07:16.240 "flush": false, 00:07:16.240 "reset": true, 00:07:16.240 "nvme_admin": false, 00:07:16.240 "nvme_io": false, 00:07:16.240 "nvme_io_md": false, 00:07:16.240 "write_zeroes": true, 00:07:16.240 "zcopy": false, 00:07:16.240 "get_zone_info": false, 00:07:16.240 "zone_management": false, 00:07:16.240 "zone_append": false, 00:07:16.240 "compare": false, 00:07:16.240 "compare_and_write": false, 00:07:16.240 "abort": false, 00:07:16.240 "seek_hole": true, 00:07:16.240 "seek_data": true, 00:07:16.240 "copy": false, 00:07:16.240 "nvme_iov_md": false 00:07:16.240 }, 00:07:16.240 "driver_specific": { 00:07:16.240 "lvol": { 00:07:16.240 "lvol_store_uuid": "7bc2c139-559b-4e86-bb2f-9d44a9964be6", 00:07:16.240 "base_bdev": "aio_bdev", 00:07:16.240 "thin_provision": false, 00:07:16.240 "num_allocated_clusters": 38, 00:07:16.240 "snapshot": false, 00:07:16.240 "clone": false, 00:07:16.240 "esnap_clone": false 00:07:16.240 } 00:07:16.240 } 00:07:16.240 } 00:07:16.240 ] 00:07:16.240 12:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:16.240 12:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bc2c139-559b-4e86-bb2f-9d44a9964be6 00:07:16.240 12:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:16.498 12:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:16.498 12:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bc2c139-559b-4e86-bb2f-9d44a9964be6 00:07:16.498 12:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:16.757 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:16.757 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:17.017 [2024-11-15 12:42:25.427696] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:17.017 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bc2c139-559b-4e86-bb2f-9d44a9964be6 00:07:17.017 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:17.017 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bc2c139-559b-4e86-bb2f-9d44a9964be6 00:07:17.017 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:17.017 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.017 12:42:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:17.017 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.017 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:17.017 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.017 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:17.017 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:17.017 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bc2c139-559b-4e86-bb2f-9d44a9964be6 00:07:17.275 request: 00:07:17.275 { 00:07:17.275 "uuid": "7bc2c139-559b-4e86-bb2f-9d44a9964be6", 00:07:17.275 "method": "bdev_lvol_get_lvstores", 00:07:17.275 "req_id": 1 00:07:17.275 } 00:07:17.275 Got JSON-RPC error response 00:07:17.275 response: 00:07:17.275 { 00:07:17.275 "code": -19, 00:07:17.275 "message": "No such device" 00:07:17.275 } 00:07:17.275 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:17.275 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:17.275 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:17.275 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:17.275 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:17.275 aio_bdev 00:07:17.534 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 08c589ef-86d1-4161-89f8-d66814177973 00:07:17.534 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=08c589ef-86d1-4161-89f8-d66814177973 00:07:17.534 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:17.534 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:17.534 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:17.534 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:17.534 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:17.793 12:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 08c589ef-86d1-4161-89f8-d66814177973 -t 2000 00:07:17.793 [ 00:07:17.793 { 
00:07:17.793 "name": "08c589ef-86d1-4161-89f8-d66814177973", 00:07:17.793 "aliases": [ 00:07:17.793 "lvs/lvol" 00:07:17.793 ], 00:07:17.793 "product_name": "Logical Volume", 00:07:17.793 "block_size": 4096, 00:07:17.793 "num_blocks": 38912, 00:07:17.793 "uuid": "08c589ef-86d1-4161-89f8-d66814177973", 00:07:17.793 "assigned_rate_limits": { 00:07:17.793 "rw_ios_per_sec": 0, 00:07:17.793 "rw_mbytes_per_sec": 0, 00:07:17.793 "r_mbytes_per_sec": 0, 00:07:17.793 "w_mbytes_per_sec": 0 00:07:17.793 }, 00:07:17.793 "claimed": false, 00:07:17.793 "zoned": false, 00:07:17.793 "supported_io_types": { 00:07:17.793 "read": true, 00:07:17.793 "write": true, 00:07:17.793 "unmap": true, 00:07:17.793 "flush": false, 00:07:17.793 "reset": true, 00:07:17.793 "nvme_admin": false, 00:07:17.793 "nvme_io": false, 00:07:17.793 "nvme_io_md": false, 00:07:17.793 "write_zeroes": true, 00:07:17.793 "zcopy": false, 00:07:17.793 "get_zone_info": false, 00:07:17.793 "zone_management": false, 00:07:17.793 "zone_append": false, 00:07:17.793 "compare": false, 00:07:17.793 "compare_and_write": false, 00:07:17.793 "abort": false, 00:07:17.793 "seek_hole": true, 00:07:17.793 "seek_data": true, 00:07:17.793 "copy": false, 00:07:17.793 "nvme_iov_md": false 00:07:17.793 }, 00:07:17.793 "driver_specific": { 00:07:17.793 "lvol": { 00:07:17.793 "lvol_store_uuid": "7bc2c139-559b-4e86-bb2f-9d44a9964be6", 00:07:17.793 "base_bdev": "aio_bdev", 00:07:17.793 "thin_provision": false, 00:07:17.793 "num_allocated_clusters": 38, 00:07:17.793 "snapshot": false, 00:07:17.793 "clone": false, 00:07:17.793 "esnap_clone": false 00:07:17.793 } 00:07:17.793 } 00:07:17.793 } 00:07:17.793 ] 00:07:17.793 12:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:17.793 12:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bc2c139-559b-4e86-bb2f-9d44a9964be6 00:07:17.793 12:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:18.052 12:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:18.052 12:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bc2c139-559b-4e86-bb2f-9d44a9964be6 00:07:18.052 12:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:18.311 12:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:18.311 12:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 08c589ef-86d1-4161-89f8-d66814177973 00:07:18.569 12:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7bc2c139-559b-4e86-bb2f-9d44a9964be6 00:07:18.828 12:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:19.086 12:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:19.345 ************************************ 00:07:19.345 END TEST lvs_grow_dirty 00:07:19.345 ************************************ 00:07:19.345 00:07:19.345 real 0m20.549s 00:07:19.345 user 0m40.010s 00:07:19.345 sys 0m9.468s 00:07:19.345 12:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.345 12:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:19.604 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:19.604 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:19.604 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:19.604 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:19.604 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:19.604 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:19.604 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:19.604 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:19.604 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:19.604 nvmf_trace.0 00:07:19.604 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:19.604 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:19.604 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:19.604 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:20.171 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:20.171 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:20.171 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:20.171 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:20.171 rmmod nvme_tcp 00:07:20.171 rmmod nvme_fabrics 00:07:20.171 rmmod nvme_keyring 00:07:20.171 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:20.171 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:20.171 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:20.171 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63403 ']' 00:07:20.171 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63403 00:07:20.171 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 63403 ']' 00:07:20.171 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 63403 00:07:20.171 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:20.171 12:42:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.171 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63403 00:07:20.171 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.171 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.171 killing process with pid 63403 00:07:20.171 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63403' 00:07:20.171 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 63403 00:07:20.171 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 63403 00:07:20.171 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:20.171 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:20.171 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:20.171 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:20.171 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:20.171 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:20.171 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:20.431 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:20.431 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:20.431 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:20.431 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:20.431 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:20.431 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:20.431 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:20.431 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:20.431 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:20.431 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:20.431 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:20.431 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:20.431 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:20.431 12:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:20.431 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:20.431 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:07:20.431 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.431 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:20.431 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.431 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:07:20.431 00:07:20.431 real 0m40.495s 00:07:20.431 user 1m3.148s 00:07:20.431 sys 0m12.906s 00:07:20.431 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.431 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:20.431 ************************************ 00:07:20.431 END TEST nvmf_lvs_grow 00:07:20.431 ************************************ 00:07:20.690 12:42:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:20.690 12:42:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:20.690 12:42:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.690 12:42:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:20.690 ************************************ 00:07:20.690 START TEST nvmf_bdev_io_wait 00:07:20.690 ************************************ 00:07:20.690 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:20.690 * Looking for test storage... 
00:07:20.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:20.690 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:20.690 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:07:20.690 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:20.690 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:20.690 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.690 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.690 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.690 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:20.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.691 --rc genhtml_branch_coverage=1 00:07:20.691 --rc genhtml_function_coverage=1 00:07:20.691 --rc genhtml_legend=1 00:07:20.691 --rc geninfo_all_blocks=1 00:07:20.691 --rc geninfo_unexecuted_blocks=1 00:07:20.691 00:07:20.691 ' 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:20.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.691 --rc genhtml_branch_coverage=1 00:07:20.691 --rc genhtml_function_coverage=1 00:07:20.691 --rc genhtml_legend=1 00:07:20.691 --rc geninfo_all_blocks=1 00:07:20.691 --rc geninfo_unexecuted_blocks=1 00:07:20.691 00:07:20.691 ' 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:20.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.691 --rc genhtml_branch_coverage=1 00:07:20.691 --rc genhtml_function_coverage=1 00:07:20.691 --rc genhtml_legend=1 00:07:20.691 --rc geninfo_all_blocks=1 00:07:20.691 --rc geninfo_unexecuted_blocks=1 00:07:20.691 00:07:20.691 ' 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:20.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.691 --rc genhtml_branch_coverage=1 00:07:20.691 --rc genhtml_function_coverage=1 00:07:20.691 --rc genhtml_legend=1 00:07:20.691 --rc geninfo_all_blocks=1 00:07:20.691 --rc geninfo_unexecuted_blocks=1 00:07:20.691 00:07:20.691 ' 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.691 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:20.692 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
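[Annotation] The trace that follows is nvmftestinit (nvmf/common.sh) building the NET_TYPE=virt network for the bdev_io_wait test. Condensed from the commands visible in the log below, the topology amounts to roughly this sketch; the interface names, addresses and iptables rules are taken from the trace, while the ordering, link-up steps and error handling are simplified, so nvmf/common.sh remains the authoritative version:

    # target-side interfaces live in their own network namespace
    ip netns add nvmf_tgt_ns_spdk

    # two initiator-side and two target-side veth pairs
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # initiators get 10.0.0.1/.2, targets get 10.0.0.3/.4
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # all interfaces are brought up and the host-side peers are enslaved to one bridge
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br
    done

    # NVMe/TCP traffic to port 4420 is explicitly allowed; the harness tags each rule
    # with an 'SPDK_NVMF:' comment so the iptr teardown can strip them again later
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The pings against 10.0.0.1-10.0.0.4 that follow in the trace simply verify this topology before the nvmf target is started inside the namespace.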
00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:20.692 
12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:20.692 Cannot find device "nvmf_init_br" 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:07:20.692 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:20.951 Cannot find device "nvmf_init_br2" 00:07:20.951 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:07:20.951 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:20.951 Cannot find device "nvmf_tgt_br" 00:07:20.951 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:07:20.951 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:20.951 Cannot find device "nvmf_tgt_br2" 00:07:20.951 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:07:20.951 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:20.951 Cannot find device "nvmf_init_br" 00:07:20.951 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:07:20.951 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:20.951 Cannot find device "nvmf_init_br2" 00:07:20.951 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:07:20.951 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:20.951 Cannot find device "nvmf_tgt_br" 00:07:20.951 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:07:20.951 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:20.951 Cannot find device "nvmf_tgt_br2" 00:07:20.951 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:20.952 Cannot find device "nvmf_br" 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:20.952 Cannot find device "nvmf_init_if" 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:20.952 Cannot find device "nvmf_init_if2" 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:20.952 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:07:20.952 
12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:20.952 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:20.952 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:21.211 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:21.211 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:07:21.211 00:07:21.211 --- 10.0.0.3 ping statistics --- 00:07:21.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.211 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:21.211 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:21.211 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:07:21.211 00:07:21.211 --- 10.0.0.4 ping statistics --- 00:07:21.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.211 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:21.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:21.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:07:21.211 00:07:21.211 --- 10.0.0.1 ping statistics --- 00:07:21.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.211 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:21.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:21.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:07:21.211 00:07:21.211 --- 10.0.0.2 ping statistics --- 00:07:21.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.211 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=63769 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 63769 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 63769 ']' 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.211 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:21.211 [2024-11-15 12:42:29.752491] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:07:21.211 [2024-11-15 12:42:29.752563] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.471 [2024-11-15 12:42:29.892513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:21.471 [2024-11-15 12:42:29.921283] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:21.471 [2024-11-15 12:42:29.921342] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:21.471 [2024-11-15 12:42:29.921352] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:21.471 [2024-11-15 12:42:29.921358] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:21.471 [2024-11-15 12:42:29.921364] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:21.471 [2024-11-15 12:42:29.922106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.471 [2024-11-15 12:42:29.922238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.471 [2024-11-15 12:42:29.922335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.471 [2024-11-15 12:42:29.922336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.471 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.471 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:21.471 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:21.471 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:21.471 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:21.471 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:21.471 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:21.471 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.471 12:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:21.471 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.471 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:21.471 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.471 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:21.471 [2024-11-15 12:42:30.038663] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.471 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.471 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:21.471 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.471 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:21.471 [2024-11-15 12:42:30.053154] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:21.471 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.471 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:21.471 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.471 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:21.471 Malloc0 00:07:21.471 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.471 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:21.471 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.471 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:21.471 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.471 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:21.471 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.471 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:21.471 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.471 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:21.471 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:21.472 [2024-11-15 12:42:30.099326] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=63802 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=63804 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:21.472 12:42:30 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:21.472 { 00:07:21.472 "params": { 00:07:21.472 "name": "Nvme$subsystem", 00:07:21.472 "trtype": "$TEST_TRANSPORT", 00:07:21.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:21.472 "adrfam": "ipv4", 00:07:21.472 "trsvcid": "$NVMF_PORT", 00:07:21.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:21.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:21.472 "hdgst": ${hdgst:-false}, 00:07:21.472 "ddgst": ${ddgst:-false} 00:07:21.472 }, 00:07:21.472 "method": "bdev_nvme_attach_controller" 00:07:21.472 } 00:07:21.472 EOF 00:07:21.472 )") 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=63806 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:21.472 { 00:07:21.472 "params": { 00:07:21.472 "name": "Nvme$subsystem", 00:07:21.472 "trtype": "$TEST_TRANSPORT", 00:07:21.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:21.472 "adrfam": "ipv4", 00:07:21.472 "trsvcid": "$NVMF_PORT", 00:07:21.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:21.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:21.472 "hdgst": ${hdgst:-false}, 00:07:21.472 "ddgst": ${ddgst:-false} 00:07:21.472 }, 00:07:21.472 "method": "bdev_nvme_attach_controller" 00:07:21.472 } 00:07:21.472 EOF 00:07:21.472 )") 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=63809 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:07:21.472 { 00:07:21.472 "params": { 00:07:21.472 "name": "Nvme$subsystem", 00:07:21.472 "trtype": "$TEST_TRANSPORT", 00:07:21.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:21.472 "adrfam": "ipv4", 00:07:21.472 "trsvcid": "$NVMF_PORT", 00:07:21.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:21.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:21.472 "hdgst": ${hdgst:-false}, 00:07:21.472 "ddgst": ${ddgst:-false} 00:07:21.472 }, 00:07:21.472 "method": "bdev_nvme_attach_controller" 00:07:21.472 } 00:07:21.472 EOF 00:07:21.472 )") 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:21.472 { 00:07:21.472 "params": { 00:07:21.472 "name": "Nvme$subsystem", 00:07:21.472 "trtype": "$TEST_TRANSPORT", 00:07:21.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:21.472 "adrfam": "ipv4", 00:07:21.472 "trsvcid": "$NVMF_PORT", 00:07:21.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:21.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:21.472 "hdgst": ${hdgst:-false}, 00:07:21.472 "ddgst": ${ddgst:-false} 00:07:21.472 }, 00:07:21.472 "method": "bdev_nvme_attach_controller" 00:07:21.472 } 00:07:21.472 EOF 00:07:21.472 )") 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:21.472 "params": { 00:07:21.472 "name": "Nvme1", 00:07:21.472 "trtype": "tcp", 00:07:21.472 "traddr": "10.0.0.3", 00:07:21.472 "adrfam": "ipv4", 00:07:21.472 "trsvcid": "4420", 00:07:21.472 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:21.472 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:21.472 "hdgst": false, 00:07:21.472 "ddgst": false 00:07:21.472 }, 00:07:21.472 "method": "bdev_nvme_attach_controller" 00:07:21.472 }' 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:21.472 "params": { 00:07:21.472 "name": "Nvme1", 00:07:21.472 "trtype": "tcp", 00:07:21.472 "traddr": "10.0.0.3", 00:07:21.472 "adrfam": "ipv4", 00:07:21.472 "trsvcid": "4420", 00:07:21.472 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:21.472 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:21.472 "hdgst": false, 00:07:21.472 "ddgst": false 00:07:21.472 }, 00:07:21.472 "method": "bdev_nvme_attach_controller" 00:07:21.472 }' 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:21.472 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:21.472 "params": { 00:07:21.472 "name": "Nvme1", 00:07:21.472 "trtype": "tcp", 00:07:21.472 "traddr": "10.0.0.3", 00:07:21.472 "adrfam": "ipv4", 00:07:21.472 "trsvcid": "4420", 00:07:21.472 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:21.472 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:21.472 "hdgst": false, 00:07:21.472 "ddgst": false 00:07:21.472 }, 00:07:21.472 "method": "bdev_nvme_attach_controller" 00:07:21.472 }' 00:07:21.732 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:21.732 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:21.732 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:21.732 "params": { 00:07:21.732 "name": "Nvme1", 00:07:21.732 "trtype": "tcp", 00:07:21.732 "traddr": "10.0.0.3", 00:07:21.732 "adrfam": "ipv4", 00:07:21.732 "trsvcid": "4420", 00:07:21.732 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:21.732 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:21.732 "hdgst": false, 00:07:21.732 "ddgst": false 00:07:21.732 }, 00:07:21.732 "method": "bdev_nvme_attach_controller" 00:07:21.732 }' 00:07:21.732 [2024-11-15 12:42:30.165580] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:07:21.732 [2024-11-15 12:42:30.165682] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:21.732 [2024-11-15 12:42:30.171336] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:07:21.732 [2024-11-15 12:42:30.171425] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:21.732 12:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 63802 00:07:21.732 [2024-11-15 12:42:30.181455] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:07:21.732 [2024-11-15 12:42:30.181527] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:21.732 [2024-11-15 12:42:30.199227] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:07:21.732 [2024-11-15 12:42:30.199333] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:21.732 [2024-11-15 12:42:30.356248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.732 [2024-11-15 12:42:30.387646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:21.732 [2024-11-15 12:42:30.395145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.991 [2024-11-15 12:42:30.401487] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.991 [2024-11-15 12:42:30.427062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:21.991 [2024-11-15 12:42:30.431069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.991 [2024-11-15 12:42:30.440864] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.991 [2024-11-15 12:42:30.456220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:21.991 [2024-11-15 12:42:30.469249] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.991 [2024-11-15 12:42:30.477847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.991 Running I/O for 1 seconds... 00:07:21.991 [2024-11-15 12:42:30.508809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:21.991 [2024-11-15 12:42:30.522522] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.991 Running I/O for 1 seconds... 00:07:21.991 Running I/O for 1 seconds... 00:07:21.991 Running I/O for 1 seconds... 
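The four bdevperf jobs traced above (write, read, flush and unmap against Nvme1/cnode1 at 10.0.0.3:4420 over TCP) can be approximated with the minimal sketch below. Paths, core masks, shared-memory ids and I/O parameters are taken from the trace; the "-w write" flag for the first job and the explicit exports are inferred, and the authoritative logic lives in test/nvmf/target/bdev_io_wait.sh, which may differ in detail.

#!/usr/bin/env bash
# Sketch only: assumes an SPDK NVMe-oF TCP target is already listening on
# 10.0.0.3:4420 and exposing nqn.2016-06.io.spdk:cnode1, as set up earlier in
# this run. gen_nvmf_target_json comes from test/nvmf/common.sh and emits the
# bdev_nvme_attach_controller config shown in the trace above.
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
export TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.3 NVMF_PORT=4420
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

# Each job gets its own core mask and shm id (-i); the <(...) process
# substitution is why the trace shows "--json /dev/fd/63" as the config file.
"$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
"$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
"$BDEVPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
"$BDEVPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!

# The test then waits on all four pids before tearing the target down, which
# matches the "wait 63802/63804/63806/63809" lines later in the trace.
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"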
00:07:22.927 175512.00 IOPS, 685.59 MiB/s 00:07:22.927 Latency(us) 00:07:22.927 [2024-11-15T12:42:31.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.927 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:22.927 Nvme1n1 : 1.00 175172.98 684.27 0.00 0.00 726.92 370.50 1936.29 00:07:22.927 [2024-11-15T12:42:31.597Z] =================================================================================================================== 00:07:22.927 [2024-11-15T12:42:31.597Z] Total : 175172.98 684.27 0.00 0.00 726.92 370.50 1936.29 00:07:22.927 10494.00 IOPS, 40.99 MiB/s 00:07:22.927 Latency(us) 00:07:22.927 [2024-11-15T12:42:31.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.927 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:22.927 Nvme1n1 : 1.01 10545.81 41.19 0.00 0.00 12087.12 5898.24 18945.86 00:07:22.927 [2024-11-15T12:42:31.597Z] =================================================================================================================== 00:07:22.927 [2024-11-15T12:42:31.597Z] Total : 10545.81 41.19 0.00 0.00 12087.12 5898.24 18945.86 00:07:22.927 7661.00 IOPS, 29.93 MiB/s 00:07:22.927 Latency(us) 00:07:22.927 [2024-11-15T12:42:31.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.927 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:22.927 Nvme1n1 : 1.01 7713.77 30.13 0.00 0.00 16505.40 8698.41 26214.40 00:07:22.927 [2024-11-15T12:42:31.597Z] =================================================================================================================== 00:07:22.927 [2024-11-15T12:42:31.597Z] Total : 7713.77 30.13 0.00 0.00 16505.40 8698.41 26214.40 00:07:23.186 8564.00 IOPS, 33.45 MiB/s 00:07:23.187 Latency(us) 00:07:23.187 [2024-11-15T12:42:31.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:23.187 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:23.187 Nvme1n1 : 1.01 8649.71 33.79 0.00 0.00 14739.99 6404.65 26095.24 00:07:23.187 [2024-11-15T12:42:31.857Z] =================================================================================================================== 00:07:23.187 [2024-11-15T12:42:31.857Z] Total : 8649.71 33.79 0.00 0.00 14739.99 6404.65 26095.24 00:07:23.187 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 63804 00:07:23.187 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 63806 00:07:23.187 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 63809 00:07:23.187 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:23.187 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.187 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:23.187 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.187 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:23.187 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:23.187 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:07:23.187 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:23.187 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:23.187 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:23.187 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:23.187 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:23.187 rmmod nvme_tcp 00:07:23.187 rmmod nvme_fabrics 00:07:23.187 rmmod nvme_keyring 00:07:23.187 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:23.187 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:23.187 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:23.187 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 63769 ']' 00:07:23.187 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 63769 00:07:23.187 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 63769 ']' 00:07:23.187 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 63769 00:07:23.187 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:23.187 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.187 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63769 00:07:23.446 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.446 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.446 killing process with pid 63769 00:07:23.446 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63769' 00:07:23.446 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 63769 00:07:23.446 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 63769 00:07:23.446 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:23.446 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:23.446 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:23.446 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:23.446 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:23.446 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:23.446 12:42:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:23.446 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:23.446 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:23.446 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:23.446 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:23.446 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:23.446 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:23.446 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:23.446 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:23.446 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:23.446 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:23.446 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:23.705 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:23.705 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:23.705 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:23.705 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:23.705 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:23.705 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.705 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:23.705 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.705 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:07:23.705 00:07:23.705 real 0m3.101s 00:07:23.705 user 0m12.380s 00:07:23.705 sys 0m2.031s 00:07:23.705 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.705 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:23.705 ************************************ 00:07:23.705 END TEST nvmf_bdev_io_wait 00:07:23.705 ************************************ 00:07:23.705 12:42:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:23.705 12:42:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:23.705 12:42:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.705 12:42:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:23.705 ************************************ 00:07:23.705 START TEST nvmf_queue_depth 00:07:23.705 ************************************ 00:07:23.705 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:23.705 * Looking for test storage... 
00:07:23.967 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:23.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.967 --rc genhtml_branch_coverage=1 00:07:23.967 --rc genhtml_function_coverage=1 00:07:23.967 --rc genhtml_legend=1 00:07:23.967 --rc geninfo_all_blocks=1 00:07:23.967 --rc geninfo_unexecuted_blocks=1 00:07:23.967 00:07:23.967 ' 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:23.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.967 --rc genhtml_branch_coverage=1 00:07:23.967 --rc genhtml_function_coverage=1 00:07:23.967 --rc genhtml_legend=1 00:07:23.967 --rc geninfo_all_blocks=1 00:07:23.967 --rc geninfo_unexecuted_blocks=1 00:07:23.967 00:07:23.967 ' 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:23.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.967 --rc genhtml_branch_coverage=1 00:07:23.967 --rc genhtml_function_coverage=1 00:07:23.967 --rc genhtml_legend=1 00:07:23.967 --rc geninfo_all_blocks=1 00:07:23.967 --rc geninfo_unexecuted_blocks=1 00:07:23.967 00:07:23.967 ' 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:23.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.967 --rc genhtml_branch_coverage=1 00:07:23.967 --rc genhtml_function_coverage=1 00:07:23.967 --rc genhtml_legend=1 00:07:23.967 --rc geninfo_all_blocks=1 00:07:23.967 --rc geninfo_unexecuted_blocks=1 00:07:23.967 00:07:23.967 ' 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:23.967 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:23.968 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:23.968 
12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:23.968 12:42:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:23.968 Cannot find device "nvmf_init_br" 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:23.968 Cannot find device "nvmf_init_br2" 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:23.968 Cannot find device "nvmf_tgt_br" 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:23.968 Cannot find device "nvmf_tgt_br2" 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:23.968 Cannot find device "nvmf_init_br" 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:23.968 Cannot find device "nvmf_init_br2" 00:07:23.968 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:07:23.969 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:23.969 Cannot find device "nvmf_tgt_br" 00:07:23.969 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:07:23.969 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:23.969 Cannot find device "nvmf_tgt_br2" 00:07:23.969 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:07:23.969 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:23.969 Cannot find device "nvmf_br" 00:07:23.969 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:07:23.969 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:23.969 Cannot find device "nvmf_init_if" 00:07:23.969 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:07:23.969 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:23.969 Cannot find device "nvmf_init_if2" 00:07:23.969 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:07:23.969 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:24.228 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:24.228 12:42:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:07:24.228 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:24.228 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:24.228 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:07:24.228 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:24.228 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:24.228 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:24.228 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:24.229 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:24.229 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:24.229 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:24.229 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:24.229 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:24.229 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:24.229 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:24.229 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:24.229 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:24.229 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:24.229 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:24.229 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:24.229 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:24.229 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:24.229 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:24.229 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:24.229 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:24.229 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:24.229 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:24.229 
12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:24.229 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:24.229 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:24.229 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:24.229 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:24.229 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:24.229 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:24.229 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:24.229 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:24.489 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:24.489 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:24.489 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:07:24.489 00:07:24.489 --- 10.0.0.3 ping statistics --- 00:07:24.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.489 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:07:24.489 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:24.489 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:24.489 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:07:24.489 00:07:24.489 --- 10.0.0.4 ping statistics --- 00:07:24.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.489 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:07:24.489 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:24.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:24.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:07:24.489 00:07:24.489 --- 10.0.0.1 ping statistics --- 00:07:24.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.489 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:07:24.489 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:24.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:24.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:07:24.489 00:07:24.489 --- 10.0.0.2 ping statistics --- 00:07:24.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.489 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:07:24.489 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:24.489 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:07:24.489 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:24.489 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:24.489 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:24.489 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:24.489 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:24.489 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:24.489 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:24.489 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:24.489 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:24.489 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:24.489 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:24.490 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64063 00:07:24.490 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:24.490 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64063 00:07:24.490 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64063 ']' 00:07:24.490 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.490 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.490 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.490 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.490 12:42:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:24.490 [2024-11-15 12:42:33.005964] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:07:24.490 [2024-11-15 12:42:33.006073] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.749 [2024-11-15 12:42:33.162755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.749 [2024-11-15 12:42:33.199898] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:24.749 [2024-11-15 12:42:33.199965] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:24.749 [2024-11-15 12:42:33.199979] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:24.749 [2024-11-15 12:42:33.199989] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:24.749 [2024-11-15 12:42:33.199998] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:24.749 [2024-11-15 12:42:33.200363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.749 [2024-11-15 12:42:33.235364] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:25.317 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.317 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:25.317 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:25.317 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:25.317 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:25.317 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.317 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:25.317 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.317 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:25.317 [2024-11-15 12:42:33.983627] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.576 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.576 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:25.576 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.576 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:25.576 Malloc0 00:07:25.576 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.576 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:25.576 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.576 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:07:25.576 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.576 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:25.576 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.576 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:25.576 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.576 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:25.576 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.576 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:25.576 [2024-11-15 12:42:34.037254] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:25.576 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.576 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64099 00:07:25.576 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:25.576 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:25.576 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64099 /var/tmp/bdevperf.sock 00:07:25.576 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64099 ']' 00:07:25.576 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:25.576 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:25.576 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:25.576 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.576 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:25.576 [2024-11-15 12:42:34.103530] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:07:25.576 [2024-11-15 12:42:34.103680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64099 ] 00:07:25.835 [2024-11-15 12:42:34.259888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.835 [2024-11-15 12:42:34.298429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.835 [2024-11-15 12:42:34.330479] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.403 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.403 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:26.403 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:26.403 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.403 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:26.662 NVMe0n1 00:07:26.662 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.662 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:26.662 Running I/O for 10 seconds... 00:07:28.535 8261.00 IOPS, 32.27 MiB/s [2024-11-15T12:42:38.583Z] 8999.50 IOPS, 35.15 MiB/s [2024-11-15T12:42:39.519Z] 9184.33 IOPS, 35.88 MiB/s [2024-11-15T12:42:40.454Z] 9451.00 IOPS, 36.92 MiB/s [2024-11-15T12:42:41.390Z] 9651.20 IOPS, 37.70 MiB/s [2024-11-15T12:42:42.326Z] 9798.83 IOPS, 38.28 MiB/s [2024-11-15T12:42:43.262Z] 9924.43 IOPS, 38.77 MiB/s [2024-11-15T12:42:44.198Z] 10009.38 IOPS, 39.10 MiB/s [2024-11-15T12:42:45.576Z] 10060.56 IOPS, 39.30 MiB/s [2024-11-15T12:42:45.576Z] 10134.40 IOPS, 39.59 MiB/s 00:07:36.906 Latency(us) 00:07:36.906 [2024-11-15T12:42:45.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.906 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:36.906 Verification LBA range: start 0x0 length 0x4000 00:07:36.906 NVMe0n1 : 10.08 10159.00 39.68 0.00 0.00 100375.40 21328.99 78166.57 00:07:36.906 [2024-11-15T12:42:45.576Z] =================================================================================================================== 00:07:36.906 [2024-11-15T12:42:45.576Z] Total : 10159.00 39.68 0.00 0.00 100375.40 21328.99 78166.57 00:07:36.906 { 00:07:36.906 "results": [ 00:07:36.906 { 00:07:36.906 "job": "NVMe0n1", 00:07:36.906 "core_mask": "0x1", 00:07:36.906 "workload": "verify", 00:07:36.906 "status": "finished", 00:07:36.906 "verify_range": { 00:07:36.906 "start": 0, 00:07:36.906 "length": 16384 00:07:36.906 }, 00:07:36.906 "queue_depth": 1024, 00:07:36.906 "io_size": 4096, 00:07:36.906 "runtime": 10.076584, 00:07:36.906 "iops": 10158.998327210888, 00:07:36.906 "mibps": 39.68358721566753, 00:07:36.906 "io_failed": 0, 00:07:36.906 "io_timeout": 0, 00:07:36.906 "avg_latency_us": 100375.4045502856, 00:07:36.906 "min_latency_us": 21328.98909090909, 00:07:36.906 "max_latency_us": 78166.57454545454 
00:07:36.906 } 00:07:36.906 ], 00:07:36.906 "core_count": 1 00:07:36.906 } 00:07:36.906 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64099 00:07:36.906 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64099 ']' 00:07:36.906 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64099 00:07:36.906 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:36.906 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:36.906 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64099 00:07:36.906 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:36.906 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:36.906 killing process with pid 64099 00:07:36.906 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64099' 00:07:36.906 Received shutdown signal, test time was about 10.000000 seconds 00:07:36.906 00:07:36.906 Latency(us) 00:07:36.906 [2024-11-15T12:42:45.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.906 [2024-11-15T12:42:45.576Z] =================================================================================================================== 00:07:36.906 [2024-11-15T12:42:45.576Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:36.906 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64099 00:07:36.906 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64099 00:07:36.906 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:36.906 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:07:36.906 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:36.906 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:36.906 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:36.906 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:36.906 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:36.906 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:36.906 rmmod nvme_tcp 00:07:36.906 rmmod nvme_fabrics 00:07:36.906 rmmod nvme_keyring 00:07:36.906 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:36.906 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:36.906 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:36.906 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64063 ']' 00:07:36.906 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64063 00:07:36.906 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64063 ']' 
00:07:36.906 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64063 00:07:36.906 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:36.906 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:36.906 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64063 00:07:37.166 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:37.166 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:37.166 killing process with pid 64063 00:07:37.166 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64063' 00:07:37.166 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64063 00:07:37.166 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64063 00:07:37.166 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:37.166 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:37.166 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:37.166 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:37.166 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:07:37.166 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:37.166 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:07:37.166 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:37.166 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:37.166 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:37.166 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:37.166 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:37.166 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:37.166 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:37.166 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:37.166 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:37.166 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:37.166 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:37.425 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:37.425 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:37.425 12:42:45 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:37.425 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:37.425 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:37.425 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.425 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:37.425 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.425 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:07:37.425 00:07:37.425 real 0m13.689s 00:07:37.425 user 0m23.357s 00:07:37.425 sys 0m2.053s 00:07:37.425 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.425 12:42:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:37.425 ************************************ 00:07:37.425 END TEST nvmf_queue_depth 00:07:37.425 ************************************ 00:07:37.425 12:42:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:37.425 12:42:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:37.425 12:42:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.425 12:42:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:37.425 ************************************ 00:07:37.425 START TEST nvmf_target_multipath 00:07:37.425 ************************************ 00:07:37.425 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:37.686 * Looking for test storage... 
00:07:37.686 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:37.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.686 --rc genhtml_branch_coverage=1 00:07:37.686 --rc genhtml_function_coverage=1 00:07:37.686 --rc genhtml_legend=1 00:07:37.686 --rc geninfo_all_blocks=1 00:07:37.686 --rc geninfo_unexecuted_blocks=1 00:07:37.686 00:07:37.686 ' 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:37.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.686 --rc genhtml_branch_coverage=1 00:07:37.686 --rc genhtml_function_coverage=1 00:07:37.686 --rc genhtml_legend=1 00:07:37.686 --rc geninfo_all_blocks=1 00:07:37.686 --rc geninfo_unexecuted_blocks=1 00:07:37.686 00:07:37.686 ' 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:37.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.686 --rc genhtml_branch_coverage=1 00:07:37.686 --rc genhtml_function_coverage=1 00:07:37.686 --rc genhtml_legend=1 00:07:37.686 --rc geninfo_all_blocks=1 00:07:37.686 --rc geninfo_unexecuted_blocks=1 00:07:37.686 00:07:37.686 ' 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:37.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.686 --rc genhtml_branch_coverage=1 00:07:37.686 --rc genhtml_function_coverage=1 00:07:37.686 --rc genhtml_legend=1 00:07:37.686 --rc geninfo_all_blocks=1 00:07:37.686 --rc geninfo_unexecuted_blocks=1 00:07:37.686 00:07:37.686 ' 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.686 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.686 
12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:37.687 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:37.687 12:42:46 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:37.687 Cannot find device "nvmf_init_br" 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:37.687 Cannot find device "nvmf_init_br2" 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:37.687 Cannot find device "nvmf_tgt_br" 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:37.687 Cannot find device "nvmf_tgt_br2" 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:37.687 Cannot find device "nvmf_init_br" 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:37.687 Cannot find device "nvmf_init_br2" 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:37.687 Cannot find device "nvmf_tgt_br" 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:37.687 Cannot find device "nvmf_tgt_br2" 00:07:37.687 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:37.947 Cannot find device "nvmf_br" 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:37.947 Cannot find device "nvmf_init_if" 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:37.947 Cannot find device "nvmf_init_if2" 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:37.947 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:37.947 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
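Editor's note: the setup traced above (nvmf_veth_init) builds a private test topology — the target runs inside the nvmf_tgt_ns_spdk network namespace and reaches the initiator over veth pairs; the trace continuing below then joins the *_br peer ends with the nvmf_br bridge and verifies connectivity with ping. A condensed sketch of one initiator/target pair is given here, with names and addresses copied from the log; it is a simplified illustration, not the full common.sh helper.

# Hedged sketch of the veth/netns topology being built (one pair shown, simplified).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge joining the *_br peers
ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  up && ip link set nvmf_tgt_br  master nvmf_br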
00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:37.947 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:38.206 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:38.206 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:07:38.206 00:07:38.206 --- 10.0.0.3 ping statistics --- 00:07:38.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.206 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:38.206 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:38.206 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:07:38.206 00:07:38.206 --- 10.0.0.4 ping statistics --- 00:07:38.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.206 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:38.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:38.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:07:38.206 00:07:38.206 --- 10.0.0.1 ping statistics --- 00:07:38.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.206 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:38.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:38.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:07:38.206 00:07:38.206 --- 10.0.0.2 ping statistics --- 00:07:38.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.206 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=64465 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 64465 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 64465 ']' 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
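Editor's note: after the ping checks succeed, nvmfappstart launches nvmf_tgt inside the namespace (pid 64465) and waitforlisten blocks until the application answers on its UNIX-domain RPC socket (/var/tmp/spdk.sock). The loop below is a hedged approximation of that wait; the retry count, sleep interval, and the rpc_get_methods probe are assumptions for illustration — the actual helper lives in common/autotest_common.sh.

# Illustrative wait-for-RPC loop (assumption: polling rpc_get_methods until the socket is live).
rpc_sock=/var/tmp/spdk.sock
for _ in $(seq 1 100); do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null; then
        break                                   # target is up and serving RPCs
    fi
    sleep 0.1                                   # back off briefly before retrying
done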
00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.206 12:42:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:38.206 [2024-11-15 12:42:46.734589] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:07:38.206 [2024-11-15 12:42:46.734691] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:38.465 [2024-11-15 12:42:46.889778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:38.465 [2024-11-15 12:42:46.930745] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:38.465 [2024-11-15 12:42:46.930813] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:38.465 [2024-11-15 12:42:46.930828] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:38.465 [2024-11-15 12:42:46.930838] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:38.466 [2024-11-15 12:42:46.930847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:38.466 [2024-11-15 12:42:46.931754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.466 [2024-11-15 12:42:46.931892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.466 [2024-11-15 12:42:46.931995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.466 [2024-11-15 12:42:46.931994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:38.466 [2024-11-15 12:42:46.968401] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.466 12:42:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.466 12:42:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:07:38.466 12:42:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:38.466 12:42:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:38.466 12:42:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:38.466 12:42:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:38.466 12:42:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:38.725 [2024-11-15 12:42:47.352163] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:38.725 12:42:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:07:39.292 Malloc0 00:07:39.292 12:42:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:07:39.292 12:42:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:39.551 12:42:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:39.810 [2024-11-15 12:42:48.394435] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:39.810 12:42:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:07:40.069 [2024-11-15 12:42:48.658710] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:07:40.069 12:42:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid=85bcfa6f-4742-42db-8cde-87c16c4a32fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:07:40.328 12:42:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid=85bcfa6f-4742-42db-8cde-87c16c4a32fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:07:40.328 12:42:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:07:40.328 12:42:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:07:40.328 12:42:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:07:40.328 12:42:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:07:40.328 12:42:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=64547 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:07:42.890 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:07:42.890 [global] 00:07:42.890 thread=1 00:07:42.890 invalidate=1 00:07:42.890 rw=randrw 00:07:42.890 time_based=1 00:07:42.890 runtime=6 00:07:42.890 ioengine=libaio 00:07:42.890 direct=1 00:07:42.890 bs=4096 00:07:42.890 iodepth=128 00:07:42.890 norandommap=0 00:07:42.890 numjobs=1 00:07:42.890 00:07:42.890 verify_dump=1 00:07:42.890 verify_backlog=512 00:07:42.890 verify_state_save=0 00:07:42.890 do_verify=1 00:07:42.890 verify=crc32c-intel 00:07:42.890 [job0] 00:07:42.890 filename=/dev/nvme0n1 00:07:42.890 Could not set queue depth (nvme0n1) 00:07:42.890 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:07:42.890 fio-3.35 00:07:42.890 Starting 1 thread 00:07:43.505 12:42:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:07:43.765 12:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:07:44.025 12:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:07:44.025 12:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:07:44.025 12:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:44.025 12:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:44.025 12:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:07:44.025 12:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:07:44.025 12:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:07:44.025 12:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:07:44.025 12:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:44.025 12:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:44.025 12:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:07:44.025 12:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:07:44.025 12:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:07:44.285 12:42:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:07:44.544 12:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:07:44.544 12:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:07:44.544 12:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:44.544 12:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:44.544 12:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:07:44.544 12:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:07:44.544 12:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:07:44.544 12:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:07:44.544 12:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:44.544 12:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:44.544 12:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:07:44.544 12:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:07:44.544 12:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 64547 00:07:48.737 00:07:48.737 job0: (groupid=0, jobs=1): err= 0: pid=64573: Fri Nov 15 12:42:57 2024 00:07:48.737 read: IOPS=11.1k, BW=43.5MiB/s (45.6MB/s)(261MiB/6002msec) 00:07:48.737 slat (usec): min=4, max=6361, avg=53.11, stdev=202.96 00:07:48.737 clat (usec): min=1413, max=14887, avg=7807.78, stdev=1316.93 00:07:48.737 lat (usec): min=1612, max=14896, avg=7860.88, stdev=1320.60 00:07:48.737 clat percentiles (usec): 00:07:48.737 | 1.00th=[ 4178], 5.00th=[ 6128], 10.00th=[ 6718], 20.00th=[ 7111], 00:07:48.737 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7832], 00:07:48.737 | 70.00th=[ 8029], 80.00th=[ 8356], 90.00th=[ 8979], 95.00th=[10814], 00:07:48.737 | 99.00th=[12125], 99.50th=[12518], 99.90th=[13304], 99.95th=[13566], 00:07:48.737 | 99.99th=[13960] 00:07:48.737 bw ( KiB/s): min=10896, max=28640, per=51.60%, avg=22974.45, stdev=5768.96, samples=11 00:07:48.737 iops : min= 2724, max= 7160, avg=5743.55, stdev=1442.22, samples=11 00:07:48.737 write: IOPS=6648, BW=26.0MiB/s (27.2MB/s)(137MiB/5276msec); 0 zone resets 00:07:48.737 slat (usec): min=14, max=1887, avg=60.37, stdev=144.28 00:07:48.737 clat (usec): min=1412, max=14032, avg=6837.52, stdev=1177.99 00:07:48.737 lat (usec): min=1441, max=14078, avg=6897.88, stdev=1183.12 00:07:48.737 clat percentiles (usec): 00:07:48.737 | 1.00th=[ 3228], 5.00th=[ 4146], 10.00th=[ 5473], 20.00th=[ 6325], 00:07:48.737 | 30.00th=[ 6587], 40.00th=[ 6783], 50.00th=[ 6980], 60.00th=[ 7177], 00:07:48.737 | 70.00th=[ 7308], 80.00th=[ 7570], 90.00th=[ 7898], 95.00th=[ 8225], 00:07:48.737 | 99.00th=[10290], 99.50th=[10814], 99.90th=[11994], 99.95th=[12256], 00:07:48.737 | 99.99th=[13960] 00:07:48.737 bw ( KiB/s): min=11552, max=27928, per=86.56%, avg=23020.09, stdev=5384.38, samples=11 00:07:48.737 iops : min= 2888, max= 6982, avg=5755.00, stdev=1346.08, samples=11 00:07:48.737 lat (msec) : 2=0.02%, 4=1.93%, 10=93.03%, 20=5.02% 00:07:48.737 cpu : usr=5.86%, sys=22.44%, ctx=5932, majf=0, minf=108 00:07:48.737 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:07:48.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:48.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:07:48.737 issued rwts: total=66809,35078,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:48.737 latency : target=0, window=0, percentile=100.00%, depth=128 00:07:48.737 00:07:48.737 Run status group 0 (all jobs): 00:07:48.737 READ: bw=43.5MiB/s (45.6MB/s), 43.5MiB/s-43.5MiB/s (45.6MB/s-45.6MB/s), io=261MiB (274MB), run=6002-6002msec 00:07:48.737 WRITE: bw=26.0MiB/s (27.2MB/s), 26.0MiB/s-26.0MiB/s (27.2MB/s-27.2MB/s), io=137MiB (144MB), run=5276-5276msec 00:07:48.737 00:07:48.737 Disk stats (read/write): 00:07:48.737 nvme0n1: ios=65761/34546, merge=0/0, ticks=491220/221142, in_queue=712362, util=98.56% 00:07:48.737 12:42:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:07:48.996 12:42:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:07:49.254 12:42:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:07:49.254 12:42:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:07:49.254 12:42:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:49.254 12:42:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:49.254 12:42:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:07:49.254 12:42:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:07:49.254 12:42:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:07:49.254 12:42:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:07:49.254 12:42:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:49.254 12:42:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:49.254 12:42:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:07:49.254 12:42:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:07:49.254 12:42:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:07:49.254 12:42:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=64654 00:07:49.254 12:42:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:07:49.254 12:42:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:07:49.254 [global] 00:07:49.254 thread=1 00:07:49.254 invalidate=1 00:07:49.254 rw=randrw 00:07:49.254 time_based=1 00:07:49.254 runtime=6 00:07:49.254 ioengine=libaio 00:07:49.254 direct=1 00:07:49.254 bs=4096 00:07:49.254 iodepth=128 00:07:49.254 norandommap=0 00:07:49.254 numjobs=1 00:07:49.254 00:07:49.254 verify_dump=1 00:07:49.254 verify_backlog=512 00:07:49.254 verify_state_save=0 00:07:49.254 do_verify=1 00:07:49.254 verify=crc32c-intel 00:07:49.254 [job0] 00:07:49.254 filename=/dev/nvme0n1 00:07:49.254 Could not set queue depth (nvme0n1) 00:07:49.512 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:07:49.512 fio-3.35 00:07:49.512 Starting 1 thread 00:07:50.448 12:42:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:07:50.448 12:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:07:51.017 
12:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:07:51.017 12:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:07:51.017 12:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:51.017 12:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:51.017 12:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:07:51.017 12:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:07:51.017 12:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:07:51.017 12:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:07:51.017 12:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:51.017 12:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:51.017 12:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:07:51.017 12:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:07:51.017 12:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:07:51.017 12:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:07:51.586 12:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:07:51.586 12:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:07:51.586 12:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:51.586 12:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:51.586 12:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:07:51.586 12:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:07:51.586 12:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:07:51.586 12:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:07:51.586 12:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:51.587 12:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:51.587 12:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:07:51.587 12:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:07:51.587 12:42:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 64654 00:07:55.776 00:07:55.776 job0: (groupid=0, jobs=1): err= 0: pid=64675: Fri Nov 15 12:43:04 2024 00:07:55.776 read: IOPS=12.0k, BW=46.9MiB/s (49.2MB/s)(282MiB/6005msec) 00:07:55.776 slat (usec): min=5, max=7963, avg=41.71, stdev=181.70 00:07:55.776 clat (usec): min=288, max=15768, avg=7337.01, stdev=1785.87 00:07:55.776 lat (usec): min=314, max=15796, avg=7378.73, stdev=1800.70 00:07:55.776 clat percentiles (usec): 00:07:55.776 | 1.00th=[ 3261], 5.00th=[ 4228], 10.00th=[ 4817], 20.00th=[ 5800], 00:07:55.776 | 30.00th=[ 6783], 40.00th=[ 7308], 50.00th=[ 7570], 60.00th=[ 7832], 00:07:55.776 | 70.00th=[ 8094], 80.00th=[ 8455], 90.00th=[ 8979], 95.00th=[10552], 00:07:55.776 | 99.00th=[12256], 99.50th=[12649], 99.90th=[13566], 99.95th=[13829], 00:07:55.776 | 99.99th=[14353] 00:07:55.776 bw ( KiB/s): min= 9280, max=45448, per=52.46%, avg=25197.27, stdev=9466.85, samples=11 00:07:55.776 iops : min= 2320, max=11362, avg=6299.27, stdev=2366.73, samples=11 00:07:55.776 write: IOPS=7163, BW=28.0MiB/s (29.3MB/s)(149MiB/5308msec); 0 zone resets 00:07:55.776 slat (usec): min=12, max=3260, avg=53.17, stdev=128.56 00:07:55.776 clat (usec): min=918, max=14540, avg=6136.31, stdev=1733.71 00:07:55.776 lat (usec): min=977, max=14577, avg=6189.48, stdev=1748.00 00:07:55.776 clat percentiles (usec): 00:07:55.776 | 1.00th=[ 2606], 5.00th=[ 3228], 10.00th=[ 3621], 20.00th=[ 4228], 00:07:55.776 | 30.00th=[ 4883], 40.00th=[ 6194], 50.00th=[ 6783], 60.00th=[ 7046], 00:07:55.776 | 70.00th=[ 7308], 80.00th=[ 7570], 90.00th=[ 7898], 95.00th=[ 8160], 00:07:55.776 | 99.00th=[ 9896], 99.50th=[10945], 99.90th=[12256], 99.95th=[12780], 00:07:55.776 | 99.99th=[14484] 00:07:55.776 bw ( KiB/s): min= 9376, max=44632, per=87.98%, avg=25210.18, stdev=9242.61, samples=11 00:07:55.776 iops : min= 2344, max=11158, avg=6302.45, stdev=2310.71, samples=11 00:07:55.776 lat (usec) : 500=0.01%, 1000=0.01% 00:07:55.776 lat (msec) : 2=0.09%, 4=7.99%, 10=87.73%, 20=4.19% 00:07:55.776 cpu : usr=6.43%, sys=24.75%, ctx=6156, majf=0, minf=108 00:07:55.776 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:07:55.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:55.776 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:07:55.776 issued rwts: total=72110,38025,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:55.776 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:07:55.776 00:07:55.776 Run status group 0 (all jobs): 00:07:55.776 READ: bw=46.9MiB/s (49.2MB/s), 46.9MiB/s-46.9MiB/s (49.2MB/s-49.2MB/s), io=282MiB (295MB), run=6005-6005msec 00:07:55.776 WRITE: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=149MiB (156MB), run=5308-5308msec 00:07:55.776 00:07:55.776 Disk stats (read/write): 00:07:55.776 nvme0n1: ios=71165/37377, merge=0/0, ticks=495815/212075, in_queue=707890, util=98.65% 00:07:55.776 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:55.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:07:55.776 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:55.776 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:07:55.776 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:07:55.776 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:55.776 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:07:55.776 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:55.776 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:07:55.776 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:56.034 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:07:56.034 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:07:56.034 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:07:56.034 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:07:56.034 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:56.034 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:56.035 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:56.035 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:07:56.035 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:56.035 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:56.035 rmmod nvme_tcp 00:07:56.035 rmmod nvme_fabrics 00:07:56.035 rmmod nvme_keyring 00:07:56.035 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:56.035 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:56.035 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:07:56.035 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 64465 ']' 
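The run above is the multipath test proper: fio keeps issuing I/O to /dev/nvme0n1 with the round-robin iopolicy while the two listeners (10.0.0.3 and 10.0.0.4) are flipped between optimized, non_optimized and inaccessible, and after every transition the harness confirms that /sys/block/nvme0c0n1/ana_state and /sys/block/nvme0c1n1/ana_state report the expected state. The multipath.sh@18 through @25 lines in the trace are that check; a condensed sketch of the helper follows, with the polling loop around the final comparison assumed rather than taken from the trace.

check_ana_state() {
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state

    # wait until the sysfs node exists and reports the expected ANA state
    while [[ ! -e "$ana_state_f" ]] || [[ $(<"$ana_state_f") != "$ana_state" ]]; do
        if (( timeout-- == 0 )); then
            return 1    # path never reached the requested state
        fi
        sleep 1
    done
}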
00:07:56.035 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 64465 00:07:56.035 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 64465 ']' 00:07:56.035 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 64465 00:07:56.035 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:07:56.035 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.035 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64465 00:07:56.035 killing process with pid 64465 00:07:56.035 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.035 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.035 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64465' 00:07:56.035 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 64465 00:07:56.035 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 64465 00:07:56.292 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:56.292 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:56.292 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:56.292 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:07:56.292 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:56.292 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:07:56.293 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:07:56.293 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:56.293 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:56.293 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:56.293 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:56.293 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:56.293 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:56.293 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:56.293 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:56.293 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:56.293 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:56.293 12:43:04 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:56.293 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:56.293 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:56.293 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:56.550 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:56.550 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:56.551 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.551 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:56.551 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:07:56.551 00:07:56.551 real 0m18.979s 00:07:56.551 user 1m10.172s 00:07:56.551 sys 0m9.889s 00:07:56.551 ************************************ 00:07:56.551 END TEST nvmf_target_multipath 00:07:56.551 ************************************ 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:56.551 ************************************ 00:07:56.551 START TEST nvmf_zcopy 00:07:56.551 ************************************ 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:56.551 * Looking for test storage... 
00:07:56.551 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.551 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:07:56.810 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.810 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:07:56.810 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:07:56.810 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.810 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:07:56.810 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.810 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.810 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.810 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:07:56.810 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.810 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:56.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.810 --rc genhtml_branch_coverage=1 00:07:56.810 --rc genhtml_function_coverage=1 00:07:56.810 --rc genhtml_legend=1 00:07:56.810 --rc geninfo_all_blocks=1 00:07:56.810 --rc geninfo_unexecuted_blocks=1 00:07:56.810 00:07:56.810 ' 00:07:56.810 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:56.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.810 --rc genhtml_branch_coverage=1 00:07:56.810 --rc genhtml_function_coverage=1 00:07:56.810 --rc genhtml_legend=1 00:07:56.810 --rc geninfo_all_blocks=1 00:07:56.810 --rc geninfo_unexecuted_blocks=1 00:07:56.810 00:07:56.810 ' 00:07:56.810 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:56.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.811 --rc genhtml_branch_coverage=1 00:07:56.811 --rc genhtml_function_coverage=1 00:07:56.811 --rc genhtml_legend=1 00:07:56.811 --rc geninfo_all_blocks=1 00:07:56.811 --rc geninfo_unexecuted_blocks=1 00:07:56.811 00:07:56.811 ' 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:56.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.811 --rc genhtml_branch_coverage=1 00:07:56.811 --rc genhtml_function_coverage=1 00:07:56.811 --rc genhtml_legend=1 00:07:56.811 --rc geninfo_all_blocks=1 00:07:56.811 --rc geninfo_unexecuted_blocks=1 00:07:56.811 00:07:56.811 ' 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
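The scripts/common.sh walk just above is only the harness picking lcov flags: lt 1.15 2 splits both version strings on '.', '-' and ':' and compares them field by field, and since 1 < 2 on the first field the pre-2.0 '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' options are exported. A simplified sketch of that comparison follows; the real helper also routes each field through its decimal() normalizer, which is omitted here.

lt() { cmp_versions "$1" "<" "$2"; }

cmp_versions() {
    local IFS=.-:                       # split version strings on '.', '-' and ':'
    local -a ver1=($1) ver2=($3)
    local op=$2 v

    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
            [[ $op == ">" || $op == ">=" ]]; return
        elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
            [[ $op == "<" || $op == "<=" ]]; return
        fi
    done
    [[ $op == "==" || $op == "<=" || $op == ">=" ]]     # all fields equal
}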
00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:56.811 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
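At this point nvmftestinit has installed the nvmftestfini cleanup trap and, because NET_TYPE=virt, hands off to nvmf_veth_init to rebuild the virtual topology traced below; the "Cannot find device" messages are just its initial cleanup pass finding nothing left over from a previous run. Condensed, and leaving out that cleanup pass and the connectivity pings, the bring-up amounts to the following.

# initiator side (10.0.0.1/10.0.0.2) stays on the host, target side (10.0.0.3/10.0.0.4)
# lives in the nvmf_tgt_ns_spdk namespace, and a bridge ties the veth peers together
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# allow NVMe/TCP (port 4420) in on the initiator interfaces and bridged forwarding
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT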
00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:56.811 Cannot find device "nvmf_init_br" 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:07:56.811 12:43:05 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:56.811 Cannot find device "nvmf_init_br2" 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:56.811 Cannot find device "nvmf_tgt_br" 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:07:56.811 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:56.811 Cannot find device "nvmf_tgt_br2" 00:07:56.812 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:07:56.812 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:56.812 Cannot find device "nvmf_init_br" 00:07:56.812 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:07:56.812 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:56.812 Cannot find device "nvmf_init_br2" 00:07:56.812 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:07:56.812 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:56.812 Cannot find device "nvmf_tgt_br" 00:07:56.812 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:07:56.812 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:56.812 Cannot find device "nvmf_tgt_br2" 00:07:56.812 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:07:56.812 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:56.812 Cannot find device "nvmf_br" 00:07:56.812 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:07:56.812 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:56.812 Cannot find device "nvmf_init_if" 00:07:56.812 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:07:56.812 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:56.812 Cannot find device "nvmf_init_if2" 00:07:56.812 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:07:56.812 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:56.812 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:56.812 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:07:56.812 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:56.812 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:56.812 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:07:56.812 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:56.812 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:56.812 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:07:56.812 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:56.812 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:56.812 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:57.071 12:43:05 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:57.071 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:57.071 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:07:57.071 00:07:57.071 --- 10.0.0.3 ping statistics --- 00:07:57.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.071 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:57.071 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:57.071 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:07:57.071 00:07:57.071 --- 10.0.0.4 ping statistics --- 00:07:57.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.071 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:57.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:57.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:07:57.071 00:07:57.071 --- 10.0.0.1 ping statistics --- 00:07:57.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.071 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:57.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:57.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:07:57.071 00:07:57.071 --- 10.0.0.2 ping statistics --- 00:07:57.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.071 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=64978 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 64978 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 64978 ']' 00:07:57.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.071 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:57.071 [2024-11-15 12:43:05.729891] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:07:57.071 [2024-11-15 12:43:05.729968] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.330 [2024-11-15 12:43:05.868876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.330 [2024-11-15 12:43:05.897925] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:57.330 [2024-11-15 12:43:05.898216] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:57.330 [2024-11-15 12:43:05.898251] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:57.330 [2024-11-15 12:43:05.898258] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:57.330 [2024-11-15 12:43:05.898265] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:57.330 [2024-11-15 12:43:05.898532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.330 [2024-11-15 12:43:05.926421] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.330 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.330 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:07:57.330 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:57.330 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:57.330 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:57.589 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:57.589 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:07:57.589 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:57.590 [2024-11-15 12:43:06.015767] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:07:57.590 [2024-11-15 12:43:06.031835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:57.590 malloc0 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:57.590 { 00:07:57.590 "params": { 00:07:57.590 "name": "Nvme$subsystem", 00:07:57.590 "trtype": "$TEST_TRANSPORT", 00:07:57.590 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:57.590 "adrfam": "ipv4", 00:07:57.590 "trsvcid": "$NVMF_PORT", 00:07:57.590 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:57.590 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:57.590 "hdgst": ${hdgst:-false}, 00:07:57.590 "ddgst": ${ddgst:-false} 00:07:57.590 }, 00:07:57.590 "method": "bdev_nvme_attach_controller" 00:07:57.590 } 00:07:57.590 EOF 00:07:57.590 )") 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
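The bring-up just traced runs the target inside the nvmf_tgt_ns_spdk namespace (nvmfappstart -m 0x2, pid 64978) and wires it up over the RPC socket: a TCP transport created with '-t tcp -o -c 0 --zcopy', subsystem nqn.2016-06.io.spdk:cnode1 with a TCP listener on 10.0.0.3:4420 plus a discovery listener, and a 32 MB malloc bdev attached as namespace 1. bdevperf then connects through the JSON that gen_nvmf_target_json emits (a single bdev_nvme_attach_controller stanza, printed in full just below and handed over as /dev/fd/62). Condensed into the underlying commands:

# rpc_cmd is the harness wrapper around scripts/rpc.py
rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
rpc_cmd bdev_malloc_create 32 4096 -b malloc0        # 32 MB bdev, 4 KiB blocks
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# 10 s verify workload at queue depth 128 with 8 KiB I/O, attaching via the generated JSON
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192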
00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:07:57.590 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:57.590 "params": { 00:07:57.590 "name": "Nvme1", 00:07:57.590 "trtype": "tcp", 00:07:57.590 "traddr": "10.0.0.3", 00:07:57.590 "adrfam": "ipv4", 00:07:57.590 "trsvcid": "4420", 00:07:57.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:57.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:57.590 "hdgst": false, 00:07:57.590 "ddgst": false 00:07:57.590 }, 00:07:57.590 "method": "bdev_nvme_attach_controller" 00:07:57.590 }' 00:07:57.590 [2024-11-15 12:43:06.109708] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:07:57.590 [2024-11-15 12:43:06.109808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65003 ] 00:07:57.590 [2024-11-15 12:43:06.257100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.849 [2024-11-15 12:43:06.296025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.849 [2024-11-15 12:43:06.337634] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.849 Running I/O for 10 seconds... 00:08:00.164 6820.00 IOPS, 53.28 MiB/s [2024-11-15T12:43:09.771Z] 6940.50 IOPS, 54.22 MiB/s [2024-11-15T12:43:10.708Z] 6987.33 IOPS, 54.59 MiB/s [2024-11-15T12:43:11.647Z] 6995.75 IOPS, 54.65 MiB/s [2024-11-15T12:43:12.584Z] 7013.80 IOPS, 54.80 MiB/s [2024-11-15T12:43:13.521Z] 7035.00 IOPS, 54.96 MiB/s [2024-11-15T12:43:14.456Z] 7050.14 IOPS, 55.08 MiB/s [2024-11-15T12:43:15.833Z] 7063.38 IOPS, 55.18 MiB/s [2024-11-15T12:43:16.771Z] 7073.11 IOPS, 55.26 MiB/s [2024-11-15T12:43:16.771Z] 7076.20 IOPS, 55.28 MiB/s 00:08:08.101 Latency(us) 00:08:08.101 [2024-11-15T12:43:16.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.101 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:08.101 Verification LBA range: start 0x0 length 0x1000 00:08:08.101 Nvme1n1 : 10.01 7080.19 55.31 0.00 0.00 18022.57 301.61 32648.84 00:08:08.101 [2024-11-15T12:43:16.771Z] =================================================================================================================== 00:08:08.101 [2024-11-15T12:43:16.771Z] Total : 7080.19 55.31 0.00 0.00 18022.57 301.61 32648.84 00:08:08.101 12:43:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65115 00:08:08.101 12:43:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:08.101 12:43:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:08.101 12:43:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:08.101 12:43:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:08.101 12:43:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:08.101 12:43:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:08.101 12:43:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:08.101 12:43:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:08.101 { 00:08:08.101 "params": { 00:08:08.101 "name": "Nvme$subsystem", 00:08:08.101 "trtype": "$TEST_TRANSPORT", 00:08:08.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:08.101 "adrfam": "ipv4", 00:08:08.101 "trsvcid": "$NVMF_PORT", 00:08:08.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:08.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:08.101 "hdgst": ${hdgst:-false}, 00:08:08.101 "ddgst": ${ddgst:-false} 00:08:08.101 }, 00:08:08.101 "method": "bdev_nvme_attach_controller" 00:08:08.101 } 00:08:08.101 EOF 00:08:08.101 )") 00:08:08.101 12:43:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:08.101 [2024-11-15 12:43:16.585117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.101 [2024-11-15 12:43:16.585156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.101 12:43:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:08.101 12:43:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:08.101 12:43:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:08.101 "params": { 00:08:08.101 "name": "Nvme1", 00:08:08.101 "trtype": "tcp", 00:08:08.101 "traddr": "10.0.0.3", 00:08:08.101 "adrfam": "ipv4", 00:08:08.101 "trsvcid": "4420", 00:08:08.101 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:08.101 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:08.101 "hdgst": false, 00:08:08.101 "ddgst": false 00:08:08.101 }, 00:08:08.101 "method": "bdev_nvme_attach_controller" 00:08:08.101 }' 00:08:08.101 [2024-11-15 12:43:16.593070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.101 [2024-11-15 12:43:16.593096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.101 [2024-11-15 12:43:16.601073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.101 [2024-11-15 12:43:16.601249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.101 [2024-11-15 12:43:16.609081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.101 [2024-11-15 12:43:16.609264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.101 [2024-11-15 12:43:16.617102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.101 [2024-11-15 12:43:16.617347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.101 [2024-11-15 12:43:16.629094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.101 [2024-11-15 12:43:16.629296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.101 [2024-11-15 12:43:16.633102] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:08:08.101 [2024-11-15 12:43:16.633887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65115 ] 00:08:08.101 [2024-11-15 12:43:16.637116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.101 [2024-11-15 12:43:16.637317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.101 [2024-11-15 12:43:16.645099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.101 [2024-11-15 12:43:16.645283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.101 [2024-11-15 12:43:16.653108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.101 [2024-11-15 12:43:16.653303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.101 [2024-11-15 12:43:16.661105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.102 [2024-11-15 12:43:16.661363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.102 [2024-11-15 12:43:16.669105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.102 [2024-11-15 12:43:16.669271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.102 [2024-11-15 12:43:16.677125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.102 [2024-11-15 12:43:16.677313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.102 [2024-11-15 12:43:16.685114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.102 [2024-11-15 12:43:16.685302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.102 [2024-11-15 12:43:16.693104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.102 [2024-11-15 12:43:16.693238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.102 [2024-11-15 12:43:16.701106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.102 [2024-11-15 12:43:16.701236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.102 [2024-11-15 12:43:16.709107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.102 [2024-11-15 12:43:16.709282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.102 [2024-11-15 12:43:16.717112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.102 [2024-11-15 12:43:16.717240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.102 [2024-11-15 12:43:16.725131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.102 [2024-11-15 12:43:16.725288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.102 [2024-11-15 12:43:16.733112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.102 [2024-11-15 12:43:16.733240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.102 [2024-11-15 12:43:16.741114] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.102 [2024-11-15 12:43:16.741286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.102 [2024-11-15 12:43:16.753120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.102 [2024-11-15 12:43:16.753280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.102 [2024-11-15 12:43:16.761117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.102 [2024-11-15 12:43:16.761273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.102 [2024-11-15 12:43:16.769138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.102 [2024-11-15 12:43:16.769350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.362 [2024-11-15 12:43:16.777133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.362 [2024-11-15 12:43:16.777307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.362 [2024-11-15 12:43:16.780789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.362 [2024-11-15 12:43:16.785149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.362 [2024-11-15 12:43:16.785176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.362 [2024-11-15 12:43:16.797148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.362 [2024-11-15 12:43:16.797180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.362 [2024-11-15 12:43:16.805141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.362 [2024-11-15 12:43:16.805167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.362 [2024-11-15 12:43:16.813282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.362 [2024-11-15 12:43:16.817148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.362 [2024-11-15 12:43:16.817175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.362 [2024-11-15 12:43:16.825144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.362 [2024-11-15 12:43:16.825171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.362 [2024-11-15 12:43:16.837185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.362 [2024-11-15 12:43:16.837225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.362 [2024-11-15 12:43:16.845171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.362 [2024-11-15 12:43:16.845456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.362 [2024-11-15 12:43:16.851225] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.362 [2024-11-15 12:43:16.853159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.362 [2024-11-15 12:43:16.853187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.362 [2024-11-15 12:43:16.861173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:08.362 [2024-11-15 12:43:16.861209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.362 [2024-11-15 12:43:16.869154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.362 [2024-11-15 12:43:16.869194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.362 [2024-11-15 12:43:16.877150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.362 [2024-11-15 12:43:16.877174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.362 [2024-11-15 12:43:16.889174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.362 [2024-11-15 12:43:16.889206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.362 [2024-11-15 12:43:16.897173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.362 [2024-11-15 12:43:16.897203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.362 [2024-11-15 12:43:16.905199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.362 [2024-11-15 12:43:16.905237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.362 [2024-11-15 12:43:16.913183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.362 [2024-11-15 12:43:16.913212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.362 [2024-11-15 12:43:16.921186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.362 [2024-11-15 12:43:16.921216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.362 [2024-11-15 12:43:16.933192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.362 [2024-11-15 12:43:16.933220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.362 [2024-11-15 12:43:16.941204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.362 [2024-11-15 12:43:16.941236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.362 [2024-11-15 12:43:16.949206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.362 [2024-11-15 12:43:16.949235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.362 Running I/O for 5 seconds... 
00:08:08.362 [2024-11-15 12:43:16.957209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.362 [2024-11-15 12:43:16.957237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.362 [2024-11-15 12:43:16.970417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.362 [2024-11-15 12:43:16.970625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.362 [2024-11-15 12:43:16.981831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.362 [2024-11-15 12:43:16.982020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.362 [2024-11-15 12:43:16.991277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.362 [2024-11-15 12:43:16.991311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.362 [2024-11-15 12:43:17.001790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.362 [2024-11-15 12:43:17.001822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.362 [2024-11-15 12:43:17.012784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.362 [2024-11-15 12:43:17.012978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.362 [2024-11-15 12:43:17.021538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.362 [2024-11-15 12:43:17.021573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.622 [2024-11-15 12:43:17.033614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.622 [2024-11-15 12:43:17.033665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.622 [2024-11-15 12:43:17.042579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.622 [2024-11-15 12:43:17.042652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.622 [2024-11-15 12:43:17.054304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.622 [2024-11-15 12:43:17.054336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.622 [2024-11-15 12:43:17.065745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.622 [2024-11-15 12:43:17.065791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.622 [2024-11-15 12:43:17.073402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.622 [2024-11-15 12:43:17.073451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.622 [2024-11-15 12:43:17.089244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.622 [2024-11-15 12:43:17.089276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.622 [2024-11-15 12:43:17.097786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.622 [2024-11-15 12:43:17.097818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.622 [2024-11-15 12:43:17.110455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.622 
[2024-11-15 12:43:17.110489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.622 [2024-11-15 12:43:17.121454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.622 [2024-11-15 12:43:17.121648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.622 [2024-11-15 12:43:17.137088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.622 [2024-11-15 12:43:17.137262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.622 [2024-11-15 12:43:17.146325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.622 [2024-11-15 12:43:17.146357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.622 [2024-11-15 12:43:17.161292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.622 [2024-11-15 12:43:17.161325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.622 [2024-11-15 12:43:17.179023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.622 [2024-11-15 12:43:17.179196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.622 [2024-11-15 12:43:17.188678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.622 [2024-11-15 12:43:17.188712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.622 [2024-11-15 12:43:17.202450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.622 [2024-11-15 12:43:17.202654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.622 [2024-11-15 12:43:17.211387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.622 [2024-11-15 12:43:17.211420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.622 [2024-11-15 12:43:17.221371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.622 [2024-11-15 12:43:17.221444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.622 [2024-11-15 12:43:17.230903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.622 [2024-11-15 12:43:17.230936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.622 [2024-11-15 12:43:17.244720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.622 [2024-11-15 12:43:17.244753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.622 [2024-11-15 12:43:17.252682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.622 [2024-11-15 12:43:17.252713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.622 [2024-11-15 12:43:17.264342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.622 [2024-11-15 12:43:17.264374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.622 [2024-11-15 12:43:17.273780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.622 [2024-11-15 12:43:17.273811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.622 [2024-11-15 12:43:17.284957] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.622 [2024-11-15 12:43:17.285135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.882 [2024-11-15 12:43:17.295383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.882 [2024-11-15 12:43:17.295416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.882 [2024-11-15 12:43:17.305614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.882 [2024-11-15 12:43:17.305678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.882 [2024-11-15 12:43:17.317285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.882 [2024-11-15 12:43:17.317317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.882 [2024-11-15 12:43:17.326327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.882 [2024-11-15 12:43:17.326359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.882 [2024-11-15 12:43:17.337442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.882 [2024-11-15 12:43:17.337476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.882 [2024-11-15 12:43:17.349250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.882 [2024-11-15 12:43:17.349282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.882 [2024-11-15 12:43:17.365760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.882 [2024-11-15 12:43:17.365804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.882 [2024-11-15 12:43:17.382243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.882 [2024-11-15 12:43:17.382275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.882 [2024-11-15 12:43:17.393359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.882 [2024-11-15 12:43:17.393431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.882 [2024-11-15 12:43:17.401340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.882 [2024-11-15 12:43:17.401372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.882 [2024-11-15 12:43:17.413008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.882 [2024-11-15 12:43:17.413071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.882 [2024-11-15 12:43:17.423834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.882 [2024-11-15 12:43:17.423866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.882 [2024-11-15 12:43:17.439977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.882 [2024-11-15 12:43:17.440009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.882 [2024-11-15 12:43:17.448897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.882 [2024-11-15 12:43:17.448930] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.882 [2024-11-15 12:43:17.463060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.882 [2024-11-15 12:43:17.463090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.882 [2024-11-15 12:43:17.471547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.882 [2024-11-15 12:43:17.471578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.882 [2024-11-15 12:43:17.486015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.882 [2024-11-15 12:43:17.486190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.882 [2024-11-15 12:43:17.502786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.882 [2024-11-15 12:43:17.502817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.882 [2024-11-15 12:43:17.513801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.882 [2024-11-15 12:43:17.513833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.882 [2024-11-15 12:43:17.521965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.882 [2024-11-15 12:43:17.521999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.882 [2024-11-15 12:43:17.533069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.882 [2024-11-15 12:43:17.533244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.882 [2024-11-15 12:43:17.544604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.882 [2024-11-15 12:43:17.544834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.141 [2024-11-15 12:43:17.560062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.141 [2024-11-15 12:43:17.560220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.141 [2024-11-15 12:43:17.568394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.141 [2024-11-15 12:43:17.568426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.141 [2024-11-15 12:43:17.583031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.141 [2024-11-15 12:43:17.583203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.141 [2024-11-15 12:43:17.591262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.142 [2024-11-15 12:43:17.591309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.142 [2024-11-15 12:43:17.602923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.142 [2024-11-15 12:43:17.602972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.142 [2024-11-15 12:43:17.613755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.142 [2024-11-15 12:43:17.613802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.142 [2024-11-15 12:43:17.622108] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.142 [2024-11-15 12:43:17.622141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.142 [2024-11-15 12:43:17.633665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.142 [2024-11-15 12:43:17.633715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.142 [2024-11-15 12:43:17.644554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.142 [2024-11-15 12:43:17.644586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.142 [2024-11-15 12:43:17.660677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.142 [2024-11-15 12:43:17.660708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.142 [2024-11-15 12:43:17.678064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.142 [2024-11-15 12:43:17.678095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.142 [2024-11-15 12:43:17.689672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.142 [2024-11-15 12:43:17.689723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.142 [2024-11-15 12:43:17.705096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.142 [2024-11-15 12:43:17.705142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.142 [2024-11-15 12:43:17.717081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.142 [2024-11-15 12:43:17.717125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.142 [2024-11-15 12:43:17.730061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.142 [2024-11-15 12:43:17.730104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.142 [2024-11-15 12:43:17.746074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.142 [2024-11-15 12:43:17.746117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.142 [2024-11-15 12:43:17.757333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.142 [2024-11-15 12:43:17.757400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.142 [2024-11-15 12:43:17.768975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.142 [2024-11-15 12:43:17.769017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.142 [2024-11-15 12:43:17.783406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.142 [2024-11-15 12:43:17.783440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.142 [2024-11-15 12:43:17.799223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.142 [2024-11-15 12:43:17.799256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.142 [2024-11-15 12:43:17.808937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.142 [2024-11-15 12:43:17.809000] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.401 [2024-11-15 12:43:17.823126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.401 [2024-11-15 12:43:17.823167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.401 [2024-11-15 12:43:17.832908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.401 [2024-11-15 12:43:17.832956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.401 [2024-11-15 12:43:17.844269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.401 [2024-11-15 12:43:17.844300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.401 [2024-11-15 12:43:17.854050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.401 [2024-11-15 12:43:17.854227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.401 [2024-11-15 12:43:17.868012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.401 [2024-11-15 12:43:17.868075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.401 [2024-11-15 12:43:17.876766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.401 [2024-11-15 12:43:17.876798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.401 [2024-11-15 12:43:17.890970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.401 [2024-11-15 12:43:17.891003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.401 [2024-11-15 12:43:17.899449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.401 [2024-11-15 12:43:17.899480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.401 [2024-11-15 12:43:17.913494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.401 [2024-11-15 12:43:17.913529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.401 [2024-11-15 12:43:17.929944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.401 [2024-11-15 12:43:17.929976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.401 [2024-11-15 12:43:17.946198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.401 [2024-11-15 12:43:17.946231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.401 12840.00 IOPS, 100.31 MiB/s [2024-11-15T12:43:18.071Z] [2024-11-15 12:43:17.955469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.401 [2024-11-15 12:43:17.955502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.401 [2024-11-15 12:43:17.967874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.401 [2024-11-15 12:43:17.967906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.401 [2024-11-15 12:43:17.977235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.401 [2024-11-15 12:43:17.977268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.401 [2024-11-15 
12:43:17.986757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.401 [2024-11-15 12:43:17.986789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.401 [2024-11-15 12:43:17.996410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.401 [2024-11-15 12:43:17.996441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.401 [2024-11-15 12:43:18.005787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.401 [2024-11-15 12:43:18.005819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.401 [2024-11-15 12:43:18.015018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.401 [2024-11-15 12:43:18.015049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.401 [2024-11-15 12:43:18.028583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.401 [2024-11-15 12:43:18.028659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.401 [2024-11-15 12:43:18.036995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.401 [2024-11-15 12:43:18.037043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.401 [2024-11-15 12:43:18.048771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.401 [2024-11-15 12:43:18.048803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.401 [2024-11-15 12:43:18.060179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.401 [2024-11-15 12:43:18.060355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.661 [2024-11-15 12:43:18.076072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.661 [2024-11-15 12:43:18.076104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.661 [2024-11-15 12:43:18.094008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.661 [2024-11-15 12:43:18.094197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.661 [2024-11-15 12:43:18.103754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.661 [2024-11-15 12:43:18.103929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.661 [2024-11-15 12:43:18.113518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.661 [2024-11-15 12:43:18.113715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.661 [2024-11-15 12:43:18.123253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.661 [2024-11-15 12:43:18.123425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.661 [2024-11-15 12:43:18.132962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.661 [2024-11-15 12:43:18.133132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.661 [2024-11-15 12:43:18.142793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.661 [2024-11-15 12:43:18.142983] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.661 [2024-11-15 12:43:18.152448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.661 [2024-11-15 12:43:18.152659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.661 [2024-11-15 12:43:18.162057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.661 [2024-11-15 12:43:18.162228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.661 [2024-11-15 12:43:18.171979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.661 [2024-11-15 12:43:18.172150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.661 [2024-11-15 12:43:18.181450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.661 [2024-11-15 12:43:18.181644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.661 [2024-11-15 12:43:18.191106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.661 [2024-11-15 12:43:18.191278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.661 [2024-11-15 12:43:18.201003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.661 [2024-11-15 12:43:18.201173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.661 [2024-11-15 12:43:18.210930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.661 [2024-11-15 12:43:18.211117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.661 [2024-11-15 12:43:18.220875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.661 [2024-11-15 12:43:18.221081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.661 [2024-11-15 12:43:18.230414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.661 [2024-11-15 12:43:18.230585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.661 [2024-11-15 12:43:18.240291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.661 [2024-11-15 12:43:18.240461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.661 [2024-11-15 12:43:18.250292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.661 [2024-11-15 12:43:18.250462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.661 [2024-11-15 12:43:18.260203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.661 [2024-11-15 12:43:18.260374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.661 [2024-11-15 12:43:18.269838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.661 [2024-11-15 12:43:18.270025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.661 [2024-11-15 12:43:18.283625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.661 [2024-11-15 12:43:18.283811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.661 [2024-11-15 12:43:18.292207] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.661 [2024-11-15 12:43:18.292378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.661 [2024-11-15 12:43:18.303881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.661 [2024-11-15 12:43:18.304053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.661 [2024-11-15 12:43:18.313447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.661 [2024-11-15 12:43:18.313641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.661 [2024-11-15 12:43:18.323538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.661 [2024-11-15 12:43:18.323738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.921 [2024-11-15 12:43:18.335020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.921 [2024-11-15 12:43:18.335190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.921 [2024-11-15 12:43:18.344665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.921 [2024-11-15 12:43:18.344840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.921 [2024-11-15 12:43:18.354546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.921 [2024-11-15 12:43:18.354764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.921 [2024-11-15 12:43:18.364400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.921 [2024-11-15 12:43:18.364572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.921 [2024-11-15 12:43:18.374557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.921 [2024-11-15 12:43:18.374778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.921 [2024-11-15 12:43:18.384426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.921 [2024-11-15 12:43:18.384642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.921 [2024-11-15 12:43:18.394372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.921 [2024-11-15 12:43:18.394542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.921 [2024-11-15 12:43:18.408975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.921 [2024-11-15 12:43:18.409149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.921 [2024-11-15 12:43:18.426654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.921 [2024-11-15 12:43:18.426868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.921 [2024-11-15 12:43:18.435993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.921 [2024-11-15 12:43:18.436166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.921 [2024-11-15 12:43:18.445854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.921 [2024-11-15 12:43:18.445888] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.921 [2024-11-15 12:43:18.455286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.921 [2024-11-15 12:43:18.455463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.921 [2024-11-15 12:43:18.469931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.921 [2024-11-15 12:43:18.470119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.921 [2024-11-15 12:43:18.478986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.921 [2024-11-15 12:43:18.479019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.921 [2024-11-15 12:43:18.493299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.921 [2024-11-15 12:43:18.493507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.921 [2024-11-15 12:43:18.502207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.921 [2024-11-15 12:43:18.502239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.921 [2024-11-15 12:43:18.515133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.921 [2024-11-15 12:43:18.515165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.921 [2024-11-15 12:43:18.523902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.921 [2024-11-15 12:43:18.524093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.921 [2024-11-15 12:43:18.534475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.921 [2024-11-15 12:43:18.534509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.921 [2024-11-15 12:43:18.544420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.921 [2024-11-15 12:43:18.544452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.921 [2024-11-15 12:43:18.555414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.921 [2024-11-15 12:43:18.555445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.921 [2024-11-15 12:43:18.564687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.921 [2024-11-15 12:43:18.564719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.921 [2024-11-15 12:43:18.580195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.921 [2024-11-15 12:43:18.580228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.180 [2024-11-15 12:43:18.589344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.180 [2024-11-15 12:43:18.589385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.180 [2024-11-15 12:43:18.601600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.180 [2024-11-15 12:43:18.601650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.180 [2024-11-15 12:43:18.613099] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.180 [2024-11-15 12:43:18.613131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.180 [2024-11-15 12:43:18.621278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.180 [2024-11-15 12:43:18.621309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.180 [2024-11-15 12:43:18.632746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.180 [2024-11-15 12:43:18.632779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.180 [2024-11-15 12:43:18.643595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.180 [2024-11-15 12:43:18.643673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.180 [2024-11-15 12:43:18.651728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.180 [2024-11-15 12:43:18.651761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.180 [2024-11-15 12:43:18.666121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.180 [2024-11-15 12:43:18.666295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.180 [2024-11-15 12:43:18.674933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.180 [2024-11-15 12:43:18.674980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.180 [2024-11-15 12:43:18.684759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.180 [2024-11-15 12:43:18.684791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.180 [2024-11-15 12:43:18.693991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.180 [2024-11-15 12:43:18.694023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.180 [2024-11-15 12:43:18.703364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.180 [2024-11-15 12:43:18.703395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.180 [2024-11-15 12:43:18.713163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.180 [2024-11-15 12:43:18.713194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.180 [2024-11-15 12:43:18.722904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.180 [2024-11-15 12:43:18.722951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.180 [2024-11-15 12:43:18.732333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.180 [2024-11-15 12:43:18.732508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.180 [2024-11-15 12:43:18.742490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.180 [2024-11-15 12:43:18.742522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.180 [2024-11-15 12:43:18.752171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.180 [2024-11-15 12:43:18.752202] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.180 [2024-11-15 12:43:18.761915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.180 [2024-11-15 12:43:18.761946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.180 [2024-11-15 12:43:18.771261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.180 [2024-11-15 12:43:18.771436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.180 [2024-11-15 12:43:18.784885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.180 [2024-11-15 12:43:18.784922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.180 [2024-11-15 12:43:18.795212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.180 [2024-11-15 12:43:18.795248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.180 [2024-11-15 12:43:18.808409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.180 [2024-11-15 12:43:18.808453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.180 [2024-11-15 12:43:18.821290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.180 [2024-11-15 12:43:18.821335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.180 [2024-11-15 12:43:18.837424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.180 [2024-11-15 12:43:18.837472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.440 [2024-11-15 12:43:18.855078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.440 [2024-11-15 12:43:18.855118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.440 [2024-11-15 12:43:18.867589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.440 [2024-11-15 12:43:18.867698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.440 [2024-11-15 12:43:18.880181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.440 [2024-11-15 12:43:18.880216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.440 [2024-11-15 12:43:18.892003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.440 [2024-11-15 12:43:18.892036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.440 [2024-11-15 12:43:18.900458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.440 [2024-11-15 12:43:18.900491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.440 [2024-11-15 12:43:18.910515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.440 [2024-11-15 12:43:18.910724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.440 [2024-11-15 12:43:18.920693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.440 [2024-11-15 12:43:18.920725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.440 [2024-11-15 12:43:18.930380] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.440 [2024-11-15 12:43:18.930554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.440 [2024-11-15 12:43:18.940362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.440 [2024-11-15 12:43:18.940395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.440 [2024-11-15 12:43:18.950601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.440 [2024-11-15 12:43:18.950678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.440 12783.00 IOPS, 99.87 MiB/s [2024-11-15T12:43:19.110Z] [2024-11-15 12:43:18.960359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.440 [2024-11-15 12:43:18.960391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.440 [2024-11-15 12:43:18.969864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.440 [2024-11-15 12:43:18.969897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.440 [2024-11-15 12:43:18.979427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.440 [2024-11-15 12:43:18.979460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.440 [2024-11-15 12:43:18.989185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.440 [2024-11-15 12:43:18.989359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.440 [2024-11-15 12:43:18.999238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.440 [2024-11-15 12:43:18.999270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.440 [2024-11-15 12:43:19.013067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.440 [2024-11-15 12:43:19.013100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.440 [2024-11-15 12:43:19.021830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.440 [2024-11-15 12:43:19.021862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.440 [2024-11-15 12:43:19.034913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.440 [2024-11-15 12:43:19.034962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.440 [2024-11-15 12:43:19.043535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.440 [2024-11-15 12:43:19.043567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.440 [2024-11-15 12:43:19.057959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.440 [2024-11-15 12:43:19.057991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.440 [2024-11-15 12:43:19.066393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.440 [2024-11-15 12:43:19.066424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.440 [2024-11-15 12:43:19.079580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:10.440 [2024-11-15 12:43:19.079671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.440 [2024-11-15 12:43:19.088299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.440 [2024-11-15 12:43:19.088474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.440 [2024-11-15 12:43:19.102366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.440 [2024-11-15 12:43:19.102538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.699 [2024-11-15 12:43:19.112403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.699 [2024-11-15 12:43:19.112578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.699 [2024-11-15 12:43:19.125607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.699 [2024-11-15 12:43:19.125810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.699 [2024-11-15 12:43:19.134479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.699 [2024-11-15 12:43:19.134665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.699 [2024-11-15 12:43:19.149205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.699 [2024-11-15 12:43:19.149400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.699 [2024-11-15 12:43:19.158437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.699 [2024-11-15 12:43:19.158654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.699 [2024-11-15 12:43:19.170103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.699 [2024-11-15 12:43:19.170274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.699 [2024-11-15 12:43:19.185082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.699 [2024-11-15 12:43:19.185254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.699 [2024-11-15 12:43:19.193837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.699 [2024-11-15 12:43:19.194025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.699 [2024-11-15 12:43:19.206757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.699 [2024-11-15 12:43:19.206932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.699 [2024-11-15 12:43:19.223400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.699 [2024-11-15 12:43:19.223571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.699 [2024-11-15 12:43:19.239776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.699 [2024-11-15 12:43:19.239968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.699 [2024-11-15 12:43:19.250899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.699 [2024-11-15 12:43:19.251088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.700 [2024-11-15 12:43:19.267069] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.700 [2024-11-15 12:43:19.267240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.700 [2024-11-15 12:43:19.278053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.700 [2024-11-15 12:43:19.278223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.700 [2024-11-15 12:43:19.294298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.700 [2024-11-15 12:43:19.294471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.700 [2024-11-15 12:43:19.309886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.700 [2024-11-15 12:43:19.310072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.700 [2024-11-15 12:43:19.321014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.700 [2024-11-15 12:43:19.321200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.700 [2024-11-15 12:43:19.329085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.700 [2024-11-15 12:43:19.329117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.700 [2024-11-15 12:43:19.340491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.700 [2024-11-15 12:43:19.340675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.700 [2024-11-15 12:43:19.350234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.700 [2024-11-15 12:43:19.350267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.700 [2024-11-15 12:43:19.361085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.700 [2024-11-15 12:43:19.361117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.959 [2024-11-15 12:43:19.370791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.959 [2024-11-15 12:43:19.371003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.959 [2024-11-15 12:43:19.383159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.959 [2024-11-15 12:43:19.383192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.959 [2024-11-15 12:43:19.392153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.959 [2024-11-15 12:43:19.392186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.959 [2024-11-15 12:43:19.403727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.959 [2024-11-15 12:43:19.403759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.959 [2024-11-15 12:43:19.415226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.959 [2024-11-15 12:43:19.415258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.959 [2024-11-15 12:43:19.423501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.959 [2024-11-15 12:43:19.423532] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.959 [2024-11-15 12:43:19.434028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.959 [2024-11-15 12:43:19.434206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.959 [2024-11-15 12:43:19.444254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.959 [2024-11-15 12:43:19.444303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.959 [2024-11-15 12:43:19.453883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.959 [2024-11-15 12:43:19.453930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.959 [2024-11-15 12:43:19.463225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.959 [2024-11-15 12:43:19.463256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.959 [2024-11-15 12:43:19.472833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.959 [2024-11-15 12:43:19.472995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.959 [2024-11-15 12:43:19.482635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.959 [2024-11-15 12:43:19.482695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.959 [2024-11-15 12:43:19.491887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.959 [2024-11-15 12:43:19.492078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.959 [2024-11-15 12:43:19.501467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.959 [2024-11-15 12:43:19.501502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.959 [2024-11-15 12:43:19.511089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.959 [2024-11-15 12:43:19.511120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.959 [2024-11-15 12:43:19.524418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.960 [2024-11-15 12:43:19.524450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.960 [2024-11-15 12:43:19.533035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.960 [2024-11-15 12:43:19.533066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.960 [2024-11-15 12:43:19.543344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.960 [2024-11-15 12:43:19.543375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.960 [2024-11-15 12:43:19.552878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.960 [2024-11-15 12:43:19.552911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.960 [2024-11-15 12:43:19.566474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.960 [2024-11-15 12:43:19.566698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.960 [2024-11-15 12:43:19.575480] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.960 [2024-11-15 12:43:19.575512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.960 [2024-11-15 12:43:19.585720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.960 [2024-11-15 12:43:19.585767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.960 [2024-11-15 12:43:19.594971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.960 [2024-11-15 12:43:19.595002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.960 [2024-11-15 12:43:19.607416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.960 [2024-11-15 12:43:19.607448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.960 [2024-11-15 12:43:19.624071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.960 [2024-11-15 12:43:19.624105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.220 [2024-11-15 12:43:19.639964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.220 [2024-11-15 12:43:19.640138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.220 [2024-11-15 12:43:19.649030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.220 [2024-11-15 12:43:19.649077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.220 [2024-11-15 12:43:19.662998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.220 [2024-11-15 12:43:19.663187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.220 [2024-11-15 12:43:19.671087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.220 [2024-11-15 12:43:19.671118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.220 [2024-11-15 12:43:19.686509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.220 [2024-11-15 12:43:19.686715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.220 [2024-11-15 12:43:19.695369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.220 [2024-11-15 12:43:19.695401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.220 [2024-11-15 12:43:19.709928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.220 [2024-11-15 12:43:19.710103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.220 [2024-11-15 12:43:19.718855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.220 [2024-11-15 12:43:19.718887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.220 [2024-11-15 12:43:19.733143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.220 [2024-11-15 12:43:19.733318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.220 [2024-11-15 12:43:19.742414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.220 [2024-11-15 12:43:19.742446] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.220 [2024-11-15 12:43:19.756556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.220 [2024-11-15 12:43:19.756780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.220 [2024-11-15 12:43:19.765643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.220 [2024-11-15 12:43:19.765675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.220 [2024-11-15 12:43:19.779608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.220 [2024-11-15 12:43:19.779667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.220 [2024-11-15 12:43:19.788473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.220 [2024-11-15 12:43:19.788660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.220 [2024-11-15 12:43:19.802077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.220 [2024-11-15 12:43:19.802108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.220 [2024-11-15 12:43:19.810329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.220 [2024-11-15 12:43:19.810361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.220 [2024-11-15 12:43:19.820347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.220 [2024-11-15 12:43:19.820521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.220 [2024-11-15 12:43:19.830037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.220 [2024-11-15 12:43:19.830205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.220 [2024-11-15 12:43:19.839500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.220 [2024-11-15 12:43:19.839702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.220 [2024-11-15 12:43:19.849087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.220 [2024-11-15 12:43:19.849258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.220 [2024-11-15 12:43:19.862913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.220 [2024-11-15 12:43:19.863100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.220 [2024-11-15 12:43:19.872471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.220 [2024-11-15 12:43:19.872677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.220 [2024-11-15 12:43:19.885530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.220 [2024-11-15 12:43:19.885739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.480 [2024-11-15 12:43:19.898682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.480 [2024-11-15 12:43:19.898979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.480 [2024-11-15 12:43:19.912939] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.480 [2024-11-15 12:43:19.913170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.480 [2024-11-15 12:43:19.929062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.480 [2024-11-15 12:43:19.929268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.480 [2024-11-15 12:43:19.941824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.480 [2024-11-15 12:43:19.942047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.480 12837.67 IOPS, 100.29 MiB/s [2024-11-15T12:43:20.150Z] [2024-11-15 12:43:19.955439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.480 [2024-11-15 12:43:19.955669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.480 [2024-11-15 12:43:19.970367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.480 [2024-11-15 12:43:19.970542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.480 [2024-11-15 12:43:19.987190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.480 [2024-11-15 12:43:19.987363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.480 [2024-11-15 12:43:19.996530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.480 [2024-11-15 12:43:19.996736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.480 [2024-11-15 12:43:20.006880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.480 [2024-11-15 12:43:20.007078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.480 [2024-11-15 12:43:20.018129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.480 [2024-11-15 12:43:20.018182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.480 [2024-11-15 12:43:20.033978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.480 [2024-11-15 12:43:20.034011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.480 [2024-11-15 12:43:20.049685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.480 [2024-11-15 12:43:20.049750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.480 [2024-11-15 12:43:20.058127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.480 [2024-11-15 12:43:20.058175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.480 [2024-11-15 12:43:20.070272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.480 [2024-11-15 12:43:20.070304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.480 [2024-11-15 12:43:20.081368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.480 [2024-11-15 12:43:20.081428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.480 [2024-11-15 12:43:20.090203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:11.480 [2024-11-15 12:43:20.090377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.480 [2024-11-15 12:43:20.100839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.480 [2024-11-15 12:43:20.100871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.480 [2024-11-15 12:43:20.117878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.480 [2024-11-15 12:43:20.117911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.480 [2024-11-15 12:43:20.127049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.480 [2024-11-15 12:43:20.127081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.480 [2024-11-15 12:43:20.138498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.480 [2024-11-15 12:43:20.138531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.739 [2024-11-15 12:43:20.150399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.739 [2024-11-15 12:43:20.150432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.739 [2024-11-15 12:43:20.159171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.739 [2024-11-15 12:43:20.159347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.739 [2024-11-15 12:43:20.174403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.739 [2024-11-15 12:43:20.174575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.739 [2024-11-15 12:43:20.183457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.739 [2024-11-15 12:43:20.183669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.739 [2024-11-15 12:43:20.199969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.739 [2024-11-15 12:43:20.200156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.739 [2024-11-15 12:43:20.216561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.739 [2024-11-15 12:43:20.216781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.739 [2024-11-15 12:43:20.232656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.739 [2024-11-15 12:43:20.232831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.739 [2024-11-15 12:43:20.249911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.739 [2024-11-15 12:43:20.250132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.739 [2024-11-15 12:43:20.265852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.739 [2024-11-15 12:43:20.266027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.739 [2024-11-15 12:43:20.276366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.739 [2024-11-15 12:43:20.276539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.739 [2024-11-15 12:43:20.292818] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.739 [2024-11-15 12:43:20.292992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.739 [2024-11-15 12:43:20.303630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.739 [2024-11-15 12:43:20.303820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.739 [2024-11-15 12:43:20.319876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.739 [2024-11-15 12:43:20.320052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.739 [2024-11-15 12:43:20.336864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.739 [2024-11-15 12:43:20.337043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.739 [2024-11-15 12:43:20.346465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.739 [2024-11-15 12:43:20.346663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.739 [2024-11-15 12:43:20.360209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.739 [2024-11-15 12:43:20.360382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.739 [2024-11-15 12:43:20.368572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.739 [2024-11-15 12:43:20.368774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.739 [2024-11-15 12:43:20.382734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.739 [2024-11-15 12:43:20.382912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.739 [2024-11-15 12:43:20.392175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.739 [2024-11-15 12:43:20.392347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.739 [2024-11-15 12:43:20.406651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.739 [2024-11-15 12:43:20.406874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.998 [2024-11-15 12:43:20.416173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.998 [2024-11-15 12:43:20.416207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.998 [2024-11-15 12:43:20.431677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.998 [2024-11-15 12:43:20.431710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.998 [2024-11-15 12:43:20.442890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.998 [2024-11-15 12:43:20.443082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.998 [2024-11-15 12:43:20.451032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.998 [2024-11-15 12:43:20.451064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.998 [2024-11-15 12:43:20.462937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.998 [2024-11-15 12:43:20.462984] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.998 [2024-11-15 12:43:20.480056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.998 [2024-11-15 12:43:20.480088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.998 [2024-11-15 12:43:20.491392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.998 [2024-11-15 12:43:20.491424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.998 [2024-11-15 12:43:20.507576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.998 [2024-11-15 12:43:20.507635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.998 [2024-11-15 12:43:20.518827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.999 [2024-11-15 12:43:20.518861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.999 [2024-11-15 12:43:20.534621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.999 [2024-11-15 12:43:20.534830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.999 [2024-11-15 12:43:20.546263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.999 [2024-11-15 12:43:20.546436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.999 [2024-11-15 12:43:20.561655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.999 [2024-11-15 12:43:20.561687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.999 [2024-11-15 12:43:20.578821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.999 [2024-11-15 12:43:20.578854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.999 [2024-11-15 12:43:20.595129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.999 [2024-11-15 12:43:20.595162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.999 [2024-11-15 12:43:20.606331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.999 [2024-11-15 12:43:20.606362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.999 [2024-11-15 12:43:20.614275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.999 [2024-11-15 12:43:20.614307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.999 [2024-11-15 12:43:20.626207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.999 [2024-11-15 12:43:20.626239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.999 [2024-11-15 12:43:20.637802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.999 [2024-11-15 12:43:20.637836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.999 [2024-11-15 12:43:20.645963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.999 [2024-11-15 12:43:20.645995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.999 [2024-11-15 12:43:20.660243] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.999 [2024-11-15 12:43:20.660276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.258 [2024-11-15 12:43:20.669382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.258 [2024-11-15 12:43:20.669450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.258 [2024-11-15 12:43:20.679973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.258 [2024-11-15 12:43:20.680005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.258 [2024-11-15 12:43:20.696929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.258 [2024-11-15 12:43:20.696976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.258 [2024-11-15 12:43:20.713438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.258 [2024-11-15 12:43:20.713475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.258 [2024-11-15 12:43:20.724814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.258 [2024-11-15 12:43:20.724846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.258 [2024-11-15 12:43:20.733478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.258 [2024-11-15 12:43:20.733512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.258 [2024-11-15 12:43:20.743683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.258 [2024-11-15 12:43:20.743716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.258 [2024-11-15 12:43:20.753024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.258 [2024-11-15 12:43:20.753056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.258 [2024-11-15 12:43:20.767206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.258 [2024-11-15 12:43:20.767239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.258 [2024-11-15 12:43:20.776307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.258 [2024-11-15 12:43:20.776340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.258 [2024-11-15 12:43:20.787221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.258 [2024-11-15 12:43:20.787253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.258 [2024-11-15 12:43:20.795772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.258 [2024-11-15 12:43:20.795806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.258 [2024-11-15 12:43:20.807153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.258 [2024-11-15 12:43:20.807185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.258 [2024-11-15 12:43:20.818468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.258 [2024-11-15 12:43:20.818690] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.258 [2024-11-15 12:43:20.833221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.258 [2024-11-15 12:43:20.833254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.258 [2024-11-15 12:43:20.841866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.258 [2024-11-15 12:43:20.841898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.258 [2024-11-15 12:43:20.853833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.258 [2024-11-15 12:43:20.853866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.258 [2024-11-15 12:43:20.865139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.258 [2024-11-15 12:43:20.865314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.258 [2024-11-15 12:43:20.873962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.258 [2024-11-15 12:43:20.873994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.258 [2024-11-15 12:43:20.883763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.258 [2024-11-15 12:43:20.883796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.258 [2024-11-15 12:43:20.893522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.258 [2024-11-15 12:43:20.893557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.258 [2024-11-15 12:43:20.904449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.258 [2024-11-15 12:43:20.904486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.258 [2024-11-15 12:43:20.919539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.258 [2024-11-15 12:43:20.919573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.518 [2024-11-15 12:43:20.930321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.518 [2024-11-15 12:43:20.930492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.518 [2024-11-15 12:43:20.944685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.518 [2024-11-15 12:43:20.944861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.518 12899.00 IOPS, 100.77 MiB/s [2024-11-15T12:43:21.188Z] [2024-11-15 12:43:20.954277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.518 [2024-11-15 12:43:20.954451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.518 [2024-11-15 12:43:20.968529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.518 [2024-11-15 12:43:20.968781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.518 [2024-11-15 12:43:20.979841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.518 [2024-11-15 12:43:20.980075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.518 [2024-11-15 
12:43:20.994146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.518 [2024-11-15 12:43:20.994340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.518 [2024-11-15 12:43:21.010129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.518 [2024-11-15 12:43:21.010331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.518 [2024-11-15 12:43:21.021096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.518 [2024-11-15 12:43:21.021297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.518 [2024-11-15 12:43:21.035560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.518 [2024-11-15 12:43:21.035796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.518 [2024-11-15 12:43:21.048137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.518 [2024-11-15 12:43:21.048339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.518 [2024-11-15 12:43:21.062871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.518 [2024-11-15 12:43:21.063052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.518 [2024-11-15 12:43:21.071241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.518 [2024-11-15 12:43:21.071415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.518 [2024-11-15 12:43:21.081268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.518 [2024-11-15 12:43:21.081466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.518 [2024-11-15 12:43:21.090788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.518 [2024-11-15 12:43:21.090977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.518 [2024-11-15 12:43:21.100435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.518 [2024-11-15 12:43:21.100650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.518 [2024-11-15 12:43:21.110036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.518 [2024-11-15 12:43:21.110208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.518 [2024-11-15 12:43:21.119865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.518 [2024-11-15 12:43:21.120054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.518 [2024-11-15 12:43:21.129835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.518 [2024-11-15 12:43:21.129868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.518 [2024-11-15 12:43:21.143102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.518 [2024-11-15 12:43:21.143135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.518 [2024-11-15 12:43:21.151795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.518 [2024-11-15 12:43:21.151827] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.518 [2024-11-15 12:43:21.163518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.518 [2024-11-15 12:43:21.163551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.518 [2024-11-15 12:43:21.172934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.518 [2024-11-15 12:43:21.172981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.778 [2024-11-15 12:43:21.186909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.778 [2024-11-15 12:43:21.186943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.778 [2024-11-15 12:43:21.195739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.778 [2024-11-15 12:43:21.195771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.778 [2024-11-15 12:43:21.210310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.778 [2024-11-15 12:43:21.210343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.778 [2024-11-15 12:43:21.219090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.778 [2024-11-15 12:43:21.219266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.778 [2024-11-15 12:43:21.233350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.778 [2024-11-15 12:43:21.233572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.778 [2024-11-15 12:43:21.242439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.778 [2024-11-15 12:43:21.242652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.778 [2024-11-15 12:43:21.256044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.778 [2024-11-15 12:43:21.256217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.778 [2024-11-15 12:43:21.264779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.778 [2024-11-15 12:43:21.264973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.778 [2024-11-15 12:43:21.279355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.778 [2024-11-15 12:43:21.279528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.778 [2024-11-15 12:43:21.288147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.778 [2024-11-15 12:43:21.288319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.778 [2024-11-15 12:43:21.304413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.778 [2024-11-15 12:43:21.304587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.778 [2024-11-15 12:43:21.315284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.778 [2024-11-15 12:43:21.315456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.778 [2024-11-15 12:43:21.323277] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.778 [2024-11-15 12:43:21.323450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.778 [2024-11-15 12:43:21.335373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.778 [2024-11-15 12:43:21.335544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.778 [2024-11-15 12:43:21.346720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.778 [2024-11-15 12:43:21.346898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.778 [2024-11-15 12:43:21.355342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.778 [2024-11-15 12:43:21.355514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.778 [2024-11-15 12:43:21.367288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.778 [2024-11-15 12:43:21.367462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.778 [2024-11-15 12:43:21.383072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.778 [2024-11-15 12:43:21.383244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.778 [2024-11-15 12:43:21.394013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.778 [2024-11-15 12:43:21.394189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.778 [2024-11-15 12:43:21.410498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.778 [2024-11-15 12:43:21.410684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.778 [2024-11-15 12:43:21.427670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.778 [2024-11-15 12:43:21.427841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.778 [2024-11-15 12:43:21.439134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.778 [2024-11-15 12:43:21.439307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.038 [2024-11-15 12:43:21.454332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.038 [2024-11-15 12:43:21.454504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.038 [2024-11-15 12:43:21.462898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.038 [2024-11-15 12:43:21.463085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.038 [2024-11-15 12:43:21.474917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.038 [2024-11-15 12:43:21.474981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.038 [2024-11-15 12:43:21.486389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.038 [2024-11-15 12:43:21.486421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.038 [2024-11-15 12:43:21.495065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.038 [2024-11-15 12:43:21.495097] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.038 [2024-11-15 12:43:21.508600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.038 [2024-11-15 12:43:21.508677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.038 [2024-11-15 12:43:21.517194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.038 [2024-11-15 12:43:21.517368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.038 [2024-11-15 12:43:21.531950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.038 [2024-11-15 12:43:21.532124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.038 [2024-11-15 12:43:21.547740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.038 [2024-11-15 12:43:21.547772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.038 [2024-11-15 12:43:21.558389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.038 [2024-11-15 12:43:21.558563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.038 [2024-11-15 12:43:21.574205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.038 [2024-11-15 12:43:21.574379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.038 [2024-11-15 12:43:21.584675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.038 [2024-11-15 12:43:21.584850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.038 [2024-11-15 12:43:21.601053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.038 [2024-11-15 12:43:21.601226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.038 [2024-11-15 12:43:21.610680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.038 [2024-11-15 12:43:21.610853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.038 [2024-11-15 12:43:21.624364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.038 [2024-11-15 12:43:21.624537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.038 [2024-11-15 12:43:21.633399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.038 [2024-11-15 12:43:21.633612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.038 [2024-11-15 12:43:21.646655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.038 [2024-11-15 12:43:21.646865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.038 [2024-11-15 12:43:21.655388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.038 [2024-11-15 12:43:21.655561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.038 [2024-11-15 12:43:21.665228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.038 [2024-11-15 12:43:21.665426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.038 [2024-11-15 12:43:21.674386] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.038 [2024-11-15 12:43:21.674558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.038 [2024-11-15 12:43:21.687792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.038 [2024-11-15 12:43:21.687966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.038 [2024-11-15 12:43:21.696331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.038 [2024-11-15 12:43:21.696502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.298 [2024-11-15 12:43:21.707287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.298 [2024-11-15 12:43:21.707501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.298 [2024-11-15 12:43:21.719547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.298 [2024-11-15 12:43:21.719765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.298 [2024-11-15 12:43:21.727984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.298 [2024-11-15 12:43:21.728171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.298 [2024-11-15 12:43:21.739777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.298 [2024-11-15 12:43:21.739987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.298 [2024-11-15 12:43:21.748963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.298 [2024-11-15 12:43:21.749135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.298 [2024-11-15 12:43:21.764304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.298 [2024-11-15 12:43:21.764475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.298 [2024-11-15 12:43:21.772734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.298 [2024-11-15 12:43:21.772908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.298 [2024-11-15 12:43:21.787102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.298 [2024-11-15 12:43:21.787274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.298 [2024-11-15 12:43:21.795352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.298 [2024-11-15 12:43:21.795523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.298 [2024-11-15 12:43:21.806866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.298 [2024-11-15 12:43:21.807057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.298 [2024-11-15 12:43:21.816184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.298 [2024-11-15 12:43:21.816355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.298 [2024-11-15 12:43:21.830872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.298 [2024-11-15 12:43:21.831064] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.298 [2024-11-15 12:43:21.839593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.298 [2024-11-15 12:43:21.839820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.298 [2024-11-15 12:43:21.854695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.299 [2024-11-15 12:43:21.854871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.299 [2024-11-15 12:43:21.863782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.299 [2024-11-15 12:43:21.863815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.299 [2024-11-15 12:43:21.877896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.299 [2024-11-15 12:43:21.878088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.299 [2024-11-15 12:43:21.886927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.299 [2024-11-15 12:43:21.886978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.299 [2024-11-15 12:43:21.901322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.299 [2024-11-15 12:43:21.901557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.299 [2024-11-15 12:43:21.917369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.299 [2024-11-15 12:43:21.917445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.299 [2024-11-15 12:43:21.934557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.299 [2024-11-15 12:43:21.934674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.299 [2024-11-15 12:43:21.944293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.299 [2024-11-15 12:43:21.944470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.299 12887.80 IOPS, 100.69 MiB/s [2024-11-15T12:43:21.969Z] [2024-11-15 12:43:21.955702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.299 [2024-11-15 12:43:21.955736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.299 00:08:13.299 Latency(us) 00:08:13.299 [2024-11-15T12:43:21.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:13.299 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:13.299 Nvme1n1 : 5.01 12890.16 100.70 0.00 0.00 9918.88 3961.95 21567.30 00:08:13.299 [2024-11-15T12:43:21.969Z] =================================================================================================================== 00:08:13.299 [2024-11-15T12:43:21.969Z] Total : 12890.16 100.70 0.00 0.00 9918.88 3961.95 21567.30 00:08:13.299 [2024-11-15 12:43:21.962552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.299 [2024-11-15 12:43:21.962586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.558 [2024-11-15 12:43:21.974509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.558 [2024-11-15 
12:43:21.974721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.558 [2024-11-15 12:43:21.982535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.558 [2024-11-15 12:43:21.982569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.558 [2024-11-15 12:43:21.994562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.558 [2024-11-15 12:43:21.994652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.558 [2024-11-15 12:43:22.006542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.558 [2024-11-15 12:43:22.006590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.558 [2024-11-15 12:43:22.018556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.558 [2024-11-15 12:43:22.018646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.558 [2024-11-15 12:43:22.030546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.558 [2024-11-15 12:43:22.030592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.559 [2024-11-15 12:43:22.042541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.559 [2024-11-15 12:43:22.042576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.559 [2024-11-15 12:43:22.054544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.559 [2024-11-15 12:43:22.054806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.559 [2024-11-15 12:43:22.066586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.559 [2024-11-15 12:43:22.066875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.559 [2024-11-15 12:43:22.078563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.559 [2024-11-15 12:43:22.078806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.559 [2024-11-15 12:43:22.090548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.559 [2024-11-15 12:43:22.090806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.559 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65115) - No such process 00:08:13.559 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65115 00:08:13.559 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.559 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.559 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:13.559 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.559 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:13.559 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.559 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:08:13.559 delay0 00:08:13.559 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.559 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:13.559 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.559 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:13.559 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.559 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:08:13.817 [2024-11-15 12:43:22.281997] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:20.462 Initializing NVMe Controllers 00:08:20.462 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:08:20.462 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:20.462 Initialization complete. Launching workers. 00:08:20.462 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 340 00:08:20.462 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 627, failed to submit 33 00:08:20.462 success 506, unsuccessful 121, failed 0 00:08:20.462 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:20.462 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:20.462 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:20.462 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:20.462 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:20.462 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:20.462 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:20.462 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:20.462 rmmod nvme_tcp 00:08:20.462 rmmod nvme_fabrics 00:08:20.462 rmmod nvme_keyring 00:08:20.462 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:20.462 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:20.462 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:20.462 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 64978 ']' 00:08:20.462 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 64978 00:08:20.462 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 64978 ']' 00:08:20.462 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 64978 00:08:20.462 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:08:20.462 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.463 12:43:28 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64978 00:08:20.463 killing process with pid 64978 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64978' 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 64978 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 64978 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.463 12:43:28 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:08:20.463 00:08:20.463 real 0m23.848s 00:08:20.463 user 0m38.739s 00:08:20.463 sys 0m6.814s 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.463 ************************************ 00:08:20.463 END TEST nvmf_zcopy 00:08:20.463 ************************************ 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:20.463 ************************************ 00:08:20.463 START TEST nvmf_nmic 00:08:20.463 ************************************ 00:08:20.463 12:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:20.463 * Looking for test storage... 00:08:20.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:20.463 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:20.463 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:08:20.463 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:20.723 12:43:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:20.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.723 --rc genhtml_branch_coverage=1 00:08:20.723 --rc genhtml_function_coverage=1 00:08:20.723 --rc genhtml_legend=1 00:08:20.723 --rc geninfo_all_blocks=1 00:08:20.723 --rc geninfo_unexecuted_blocks=1 00:08:20.723 00:08:20.723 ' 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:20.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.723 --rc genhtml_branch_coverage=1 00:08:20.723 --rc genhtml_function_coverage=1 00:08:20.723 --rc genhtml_legend=1 00:08:20.723 --rc geninfo_all_blocks=1 00:08:20.723 --rc geninfo_unexecuted_blocks=1 00:08:20.723 00:08:20.723 ' 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:20.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.723 --rc genhtml_branch_coverage=1 00:08:20.723 --rc genhtml_function_coverage=1 00:08:20.723 --rc genhtml_legend=1 00:08:20.723 --rc geninfo_all_blocks=1 00:08:20.723 --rc geninfo_unexecuted_blocks=1 00:08:20.723 00:08:20.723 ' 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:20.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.723 --rc genhtml_branch_coverage=1 00:08:20.723 --rc genhtml_function_coverage=1 00:08:20.723 --rc genhtml_legend=1 00:08:20.723 --rc geninfo_all_blocks=1 00:08:20.723 --rc geninfo_unexecuted_blocks=1 00:08:20.723 00:08:20.723 ' 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:20.723 12:43:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.723 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:20.724 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:20.724 12:43:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:20.724 Cannot 
find device "nvmf_init_br" 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:20.724 Cannot find device "nvmf_init_br2" 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:20.724 Cannot find device "nvmf_tgt_br" 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:20.724 Cannot find device "nvmf_tgt_br2" 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:20.724 Cannot find device "nvmf_init_br" 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:20.724 Cannot find device "nvmf_init_br2" 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:20.724 Cannot find device "nvmf_tgt_br" 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:20.724 Cannot find device "nvmf_tgt_br2" 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:20.724 Cannot find device "nvmf_br" 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:20.724 Cannot find device "nvmf_init_if" 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:20.724 Cannot find device "nvmf_init_if2" 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:20.724 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:20.724 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:20.724 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:20.984 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:20.984 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:20.984 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:20.984 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:20.984 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:20.984 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:20.985 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:20.985 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:08:20.985 00:08:20.985 --- 10.0.0.3 ping statistics --- 00:08:20.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.985 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:20.985 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:20.985 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:08:20.985 00:08:20.985 --- 10.0.0.4 ping statistics --- 00:08:20.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.985 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:20.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:20.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:08:20.985 00:08:20.985 --- 10.0.0.1 ping statistics --- 00:08:20.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.985 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:20.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:20.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:08:20.985 00:08:20.985 --- 10.0.0.2 ping statistics --- 00:08:20.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.985 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=65494 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 65494 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:20.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 65494 ']' 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.985 12:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:20.985 [2024-11-15 12:43:29.651421] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
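For readability, the veth/namespace topology that the nvmf_veth_init trace above builds (from test/nvmf/common.sh) can be condensed into the sketch below. It only restates commands already visible in the log: the interface names, the nvmf_tgt_ns_spdk namespace, and the 10.0.0.0/24 addresses are the harness's own, and the `-m comment SPDK_NVMF:...` tagging the harness attaches to its iptables rules is omitted here for brevity.

# Condensed sketch of the topology set up by nvmf_veth_init (see the trace above).
ip netns add nvmf_tgt_ns_spdk

# Two initiator-side and two target-side veth pairs.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Move the target ends into the namespace and assign addresses on 10.0.0.0/24.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peers together.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Open the NVMe/TCP port for the initiator interfaces and allow bridged forwarding.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four ping checks in the trace (10.0.0.3 and 10.0.0.4 from the host, 10.0.0.1 and 10.0.0.2 from inside the namespace) verify both directions of this topology before the nvmf_tgt application is started below.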
00:08:20.985 [2024-11-15 12:43:29.651511] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.244 [2024-11-15 12:43:29.792860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:21.244 [2024-11-15 12:43:29.820973] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.244 [2024-11-15 12:43:29.821266] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.244 [2024-11-15 12:43:29.821302] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.244 [2024-11-15 12:43:29.821310] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.244 [2024-11-15 12:43:29.821317] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:21.244 [2024-11-15 12:43:29.822170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.244 [2024-11-15 12:43:29.822369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.244 [2024-11-15 12:43:29.823059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:21.244 [2024-11-15 12:43:29.823116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.244 [2024-11-15 12:43:29.852343] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:22.182 [2024-11-15 12:43:30.680523] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:22.182 Malloc0 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:22.182 12:43:30 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:22.182 [2024-11-15 12:43:30.737475] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:22.182 test case1: single bdev can't be used in multiple subsystems 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:22.182 [2024-11-15 12:43:30.761289] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:22.182 [2024-11-15 12:43:30.761509] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:22.182 [2024-11-15 12:43:30.761543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.182 request: 00:08:22.182 { 00:08:22.182 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:22.182 "namespace": { 00:08:22.182 "bdev_name": "Malloc0", 00:08:22.182 "no_auto_visible": false 00:08:22.182 }, 00:08:22.182 "method": "nvmf_subsystem_add_ns", 00:08:22.182 "req_id": 1 00:08:22.182 } 00:08:22.182 Got JSON-RPC error response 00:08:22.182 response: 00:08:22.182 { 00:08:22.182 "code": -32602, 00:08:22.182 "message": "Invalid parameters" 00:08:22.182 } 00:08:22.182 Adding namespace failed - expected result. 00:08:22.182 test case2: host connect to nvmf target in multiple paths 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:22.182 [2024-11-15 12:43:30.773429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.182 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid=85bcfa6f-4742-42db-8cde-87c16c4a32fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:08:22.441 12:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid=85bcfa6f-4742-42db-8cde-87c16c4a32fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:08:22.441 12:43:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:22.441 12:43:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:22.441 12:43:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:22.441 12:43:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:22.441 12:43:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:08:24.976 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:24.976 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:24.976 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:24.976 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:24.976 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:24.976 12:43:33 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:24.976 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:24.976 [global] 00:08:24.976 thread=1 00:08:24.976 invalidate=1 00:08:24.976 rw=write 00:08:24.976 time_based=1 00:08:24.976 runtime=1 00:08:24.976 ioengine=libaio 00:08:24.976 direct=1 00:08:24.976 bs=4096 00:08:24.976 iodepth=1 00:08:24.976 norandommap=0 00:08:24.976 numjobs=1 00:08:24.976 00:08:24.976 verify_dump=1 00:08:24.976 verify_backlog=512 00:08:24.976 verify_state_save=0 00:08:24.976 do_verify=1 00:08:24.976 verify=crc32c-intel 00:08:24.976 [job0] 00:08:24.976 filename=/dev/nvme0n1 00:08:24.976 Could not set queue depth (nvme0n1) 00:08:24.976 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:24.976 fio-3.35 00:08:24.976 Starting 1 thread 00:08:25.913 00:08:25.913 job0: (groupid=0, jobs=1): err= 0: pid=65591: Fri Nov 15 12:43:34 2024 00:08:25.913 read: IOPS=3163, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1001msec) 00:08:25.913 slat (nsec): min=11646, max=86033, avg=15285.58, stdev=5718.52 00:08:25.913 clat (usec): min=117, max=652, avg=159.03, stdev=24.58 00:08:25.913 lat (usec): min=129, max=669, avg=174.32, stdev=25.80 00:08:25.913 clat percentiles (usec): 00:08:25.913 | 1.00th=[ 128], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 143], 00:08:25.913 | 30.00th=[ 145], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 161], 00:08:25.913 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 186], 95.00th=[ 196], 00:08:25.913 | 99.00th=[ 221], 99.50th=[ 237], 99.90th=[ 347], 99.95th=[ 644], 00:08:25.913 | 99.99th=[ 652] 00:08:25.913 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:08:25.913 slat (nsec): min=13569, max=90419, avg=21383.40, stdev=6450.90 00:08:25.913 clat (usec): min=74, max=747, avg=100.53, stdev=20.96 00:08:25.913 lat (usec): min=91, max=769, avg=121.91, stdev=22.65 00:08:25.913 clat percentiles (usec): 00:08:25.913 | 1.00th=[ 79], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 87], 00:08:25.913 | 30.00th=[ 90], 40.00th=[ 93], 50.00th=[ 96], 60.00th=[ 100], 00:08:25.913 | 70.00th=[ 106], 80.00th=[ 113], 90.00th=[ 123], 95.00th=[ 133], 00:08:25.913 | 99.00th=[ 153], 99.50th=[ 176], 99.90th=[ 249], 99.95th=[ 330], 00:08:25.913 | 99.99th=[ 750] 00:08:25.913 bw ( KiB/s): min=15152, max=15152, per=100.00%, avg=15152.00, stdev= 0.00, samples=1 00:08:25.913 iops : min= 3788, max= 3788, avg=3788.00, stdev= 0.00, samples=1 00:08:25.913 lat (usec) : 100=31.54%, 250=68.23%, 500=0.19%, 750=0.04% 00:08:25.913 cpu : usr=1.70%, sys=10.70%, ctx=6751, majf=0, minf=5 00:08:25.913 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:25.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:25.913 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:25.913 issued rwts: total=3167,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:25.913 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:25.913 00:08:25.913 Run status group 0 (all jobs): 00:08:25.913 READ: bw=12.4MiB/s (13.0MB/s), 12.4MiB/s-12.4MiB/s (13.0MB/s-13.0MB/s), io=12.4MiB (13.0MB), run=1001-1001msec 00:08:25.913 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:08:25.913 00:08:25.913 Disk stats (read/write): 00:08:25.913 nvme0n1: ios=2999/3072, merge=0/0, ticks=523/365, 
in_queue=888, util=91.51% 00:08:25.913 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:25.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:25.913 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:25.913 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:25.913 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:25.913 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:25.913 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:25.913 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:25.913 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:25.913 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:25.913 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:25.913 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:25.913 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:25.913 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:25.913 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:25.913 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:25.913 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:25.913 rmmod nvme_tcp 00:08:25.913 rmmod nvme_fabrics 00:08:25.913 rmmod nvme_keyring 00:08:25.913 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:25.913 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:25.913 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:25.913 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 65494 ']' 00:08:25.913 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 65494 00:08:25.913 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 65494 ']' 00:08:25.913 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 65494 00:08:25.913 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:25.913 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:25.913 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65494 00:08:26.172 killing process with pid 65494 00:08:26.172 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:26.172 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:26.172 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65494' 00:08:26.172 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # 
kill 65494 00:08:26.172 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 65494 00:08:26.172 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:26.172 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:26.172 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:26.172 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:26.172 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:26.172 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:26.172 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:26.172 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:26.172 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:26.172 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:26.172 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:26.172 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:26.172 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:26.172 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:26.172 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:26.172 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:26.172 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:26.172 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:26.431 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:26.431 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:26.431 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:26.431 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:26.431 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:26.431 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.431 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:26.431 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.431 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:08:26.431 00:08:26.431 real 0m6.041s 00:08:26.431 user 0m18.707s 00:08:26.431 sys 0m2.307s 00:08:26.431 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.431 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:26.431 ************************************ 
00:08:26.431 END TEST nvmf_nmic 00:08:26.431 ************************************ 00:08:26.431 12:43:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:26.431 12:43:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:26.431 12:43:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.431 12:43:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:26.431 ************************************ 00:08:26.431 START TEST nvmf_fio_target 00:08:26.431 ************************************ 00:08:26.431 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:26.691 * Looking for test storage... 00:08:26.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:26.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.691 --rc genhtml_branch_coverage=1 00:08:26.691 --rc genhtml_function_coverage=1 00:08:26.691 --rc genhtml_legend=1 00:08:26.691 --rc geninfo_all_blocks=1 00:08:26.691 --rc geninfo_unexecuted_blocks=1 00:08:26.691 00:08:26.691 ' 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:26.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.691 --rc genhtml_branch_coverage=1 00:08:26.691 --rc genhtml_function_coverage=1 00:08:26.691 --rc genhtml_legend=1 00:08:26.691 --rc geninfo_all_blocks=1 00:08:26.691 --rc geninfo_unexecuted_blocks=1 00:08:26.691 00:08:26.691 ' 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:26.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.691 --rc genhtml_branch_coverage=1 00:08:26.691 --rc genhtml_function_coverage=1 00:08:26.691 --rc genhtml_legend=1 00:08:26.691 --rc geninfo_all_blocks=1 00:08:26.691 --rc geninfo_unexecuted_blocks=1 00:08:26.691 00:08:26.691 ' 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:26.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.691 --rc genhtml_branch_coverage=1 00:08:26.691 --rc genhtml_function_coverage=1 00:08:26.691 --rc genhtml_legend=1 00:08:26.691 --rc geninfo_all_blocks=1 00:08:26.691 --rc geninfo_unexecuted_blocks=1 00:08:26.691 00:08:26.691 ' 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:26.691 
12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.691 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:26.692 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:26.692 12:43:35 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:26.692 Cannot find device "nvmf_init_br" 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:26.692 Cannot find device "nvmf_init_br2" 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:26.692 Cannot find device "nvmf_tgt_br" 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:26.692 Cannot find device "nvmf_tgt_br2" 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:26.692 Cannot find device "nvmf_init_br" 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:26.692 Cannot find device "nvmf_init_br2" 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:26.692 Cannot find device "nvmf_tgt_br" 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:26.692 Cannot find device "nvmf_tgt_br2" 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:26.692 Cannot find device "nvmf_br" 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:26.692 Cannot find device "nvmf_init_if" 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:08:26.692 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:26.952 Cannot find device "nvmf_init_if2" 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:26.952 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:08:26.952 
12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:26.952 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:26.952 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:26.952 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:26.952 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:08:26.952 00:08:26.952 --- 10.0.0.3 ping statistics --- 00:08:26.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.953 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:08:26.953 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:26.953 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:26.953 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:08:26.953 00:08:26.953 --- 10.0.0.4 ping statistics --- 00:08:26.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.953 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:08:26.953 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:26.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:26.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:08:26.953 00:08:26.953 --- 10.0.0.1 ping statistics --- 00:08:26.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.953 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:08:26.953 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:26.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:26.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:08:26.953 00:08:26.953 --- 10.0.0.2 ping statistics --- 00:08:26.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.953 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:08:26.953 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:26.953 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:08:26.953 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:26.953 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:26.953 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:26.953 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:26.953 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:26.953 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:26.953 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:26.953 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:26.953 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:26.953 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:26.953 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:26.953 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=65820 00:08:26.953 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:26.953 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 65820 00:08:26.953 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 65820 ']' 00:08:26.953 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.953 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.953 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.953 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.953 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:27.211 [2024-11-15 12:43:35.661096] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
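For reference, the nvmf_veth_init sequence traced above condenses to roughly the following standalone sketch. Interface names, addresses and firewall rules are taken directly from the trace; the SPDK helper plumbing (the ipts wrapper, the "Cannot find device" pre-cleanup, timeouts) is omitted, so treat this as an illustration rather than the canonical common.sh helper.

#!/usr/bin/env bash
# Sketch of the virtual test network built by nvmf_veth_init (reconstructed from the trace above).
set -e

ip netns add nvmf_tgt_ns_spdk                      # the nvmf_tgt app runs inside this namespace

# Four veth pairs: the *_if ends carry traffic, the *_br ends get bridged together.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Initiator addresses 10.0.0.1/.2, target addresses 10.0.0.3/.4 (all /24).
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and enslave the bridge-side ends to a single bridge, nvmf_br.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" master nvmf_br
done

# Accept NVMe/TCP (port 4420) on the initiator interfaces and allow bridge-local forwarding.
# The real rules carry a longer SPDK_NVMF:... comment; the tag is what matters for cleanup.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF

# Connectivity check in both directions, as in the pings logged above.
ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The teardown logged at the end of the previous test (nvmftestfini, around 00:08:26.1-26.4) is the inverse: detach the ports from nvmf_br, delete the bridge, veth pairs and namespace, and restore an iptables-save dump filtered on the SPDK_NVMF comment.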
00:08:27.211 [2024-11-15 12:43:35.661705] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.211 [2024-11-15 12:43:35.807053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:27.211 [2024-11-15 12:43:35.836135] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.211 [2024-11-15 12:43:35.836200] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.211 [2024-11-15 12:43:35.836226] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.211 [2024-11-15 12:43:35.836233] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:27.211 [2024-11-15 12:43:35.836238] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:27.211 [2024-11-15 12:43:35.837054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.211 [2024-11-15 12:43:35.837202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:27.211 [2024-11-15 12:43:35.837290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:27.211 [2024-11-15 12:43:35.837293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.211 [2024-11-15 12:43:35.865805] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.470 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.470 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:08:27.470 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:27.470 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:27.470 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:27.470 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.470 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:27.729 [2024-11-15 12:43:36.242103] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.729 12:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:27.988 12:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:27.988 12:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:28.247 12:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:28.247 12:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:28.506 12:43:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:28.506 12:43:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:28.766 12:43:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:28.766 12:43:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:29.026 12:43:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:29.285 12:43:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:29.285 12:43:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:29.544 12:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:29.544 12:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:29.803 12:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:29.803 12:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:08:30.062 12:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:30.321 12:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:30.321 12:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:30.580 12:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:30.580 12:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:30.839 12:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:30.839 [2024-11-15 12:43:39.498655] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:31.098 12:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:31.098 12:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:31.358 12:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid=85bcfa6f-4742-42db-8cde-87c16c4a32fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:08:31.617 12:43:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:31.617 12:43:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:08:31.617 12:43:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:31.617 12:43:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:08:31.617 12:43:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:08:31.617 12:43:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:08:33.521 12:43:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:33.521 12:43:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:33.521 12:43:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:33.521 12:43:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:08:33.521 12:43:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:33.521 12:43:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:08:33.521 12:43:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:33.521 [global] 00:08:33.521 thread=1 00:08:33.521 invalidate=1 00:08:33.521 rw=write 00:08:33.521 time_based=1 00:08:33.521 runtime=1 00:08:33.522 ioengine=libaio 00:08:33.522 direct=1 00:08:33.522 bs=4096 00:08:33.522 iodepth=1 00:08:33.522 norandommap=0 00:08:33.522 numjobs=1 00:08:33.522 00:08:33.522 verify_dump=1 00:08:33.522 verify_backlog=512 00:08:33.522 verify_state_save=0 00:08:33.522 do_verify=1 00:08:33.522 verify=crc32c-intel 00:08:33.522 [job0] 00:08:33.522 filename=/dev/nvme0n1 00:08:33.522 [job1] 00:08:33.522 filename=/dev/nvme0n2 00:08:33.522 [job2] 00:08:33.522 filename=/dev/nvme0n3 00:08:33.522 [job3] 00:08:33.522 filename=/dev/nvme0n4 00:08:33.780 Could not set queue depth (nvme0n1) 00:08:33.780 Could not set queue depth (nvme0n2) 00:08:33.780 Could not set queue depth (nvme0n3) 00:08:33.780 Could not set queue depth (nvme0n4) 00:08:33.780 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:33.780 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:33.780 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:33.780 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:33.780 fio-3.35 00:08:33.780 Starting 4 threads 00:08:35.159 00:08:35.159 job0: (groupid=0, jobs=1): err= 0: pid=65992: Fri Nov 15 12:43:43 2024 00:08:35.159 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:08:35.159 slat (nsec): min=9842, max=58593, avg=11792.25, stdev=3492.05 00:08:35.159 clat (usec): min=229, max=1085, avg=352.89, stdev=77.34 00:08:35.159 lat (usec): min=241, max=1098, avg=364.68, stdev=78.42 00:08:35.159 clat percentiles (usec): 00:08:35.159 | 1.00th=[ 262], 5.00th=[ 285], 10.00th=[ 293], 20.00th=[ 306], 00:08:35.159 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 330], 60.00th=[ 338], 00:08:35.159 | 70.00th=[ 355], 80.00th=[ 371], 90.00th=[ 494], 95.00th=[ 545], 00:08:35.159 | 99.00th=[ 594], 99.50th=[ 611], 99.90th=[ 709], 99.95th=[ 1090], 00:08:35.159 | 99.99th=[ 1090] 
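The four devices these fio jobs exercise (/dev/nvme0n1 .. nvme0n4) come from the subsystem provisioned by the target/fio.sh steps traced between roughly 00:08:27.7 and 00:08:31.6 above. Condensed from that trace, and leaving out the waitforserial polling, the sequence is approximately the following sketch:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Create the TCP transport with the options recorded in the trace.
"$rpc" nvmf_create_transport -t tcp -o -u 8192

# Seven 64 MiB / 512 B-block malloc bdevs; the RPC prints the created name (Malloc0 .. Malloc6).
for _ in 1 2 3 4 5 6 7; do "$rpc" bdev_malloc_create 64 512; done

# Malloc2+Malloc3 become a raid0 bdev, Malloc4..Malloc6 a concat bdev.
"$rpc" bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
"$rpc" bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

# One subsystem with four namespaces and a TCP listener on the target-side address.
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Initiator side: connect; the four namespaces then appear as /dev/nvme0n1..n4.
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc \
             --hostid=85bcfa6f-4742-42db-8cde-87c16c4a32fc \
             -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420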
00:08:35.159 write: IOPS=1678, BW=6713KiB/s (6874kB/s)(6720KiB/1001msec); 0 zone resets 00:08:35.159 slat (usec): min=12, max=257, avg=19.76, stdev= 8.35 00:08:35.159 clat (usec): min=104, max=553, avg=238.98, stdev=45.90 00:08:35.159 lat (usec): min=130, max=766, avg=258.74, stdev=47.45 00:08:35.159 clat percentiles (usec): 00:08:35.159 | 1.00th=[ 153], 5.00th=[ 172], 10.00th=[ 184], 20.00th=[ 198], 00:08:35.159 | 30.00th=[ 210], 40.00th=[ 223], 50.00th=[ 241], 60.00th=[ 255], 00:08:35.159 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 297], 00:08:35.159 | 99.00th=[ 330], 99.50th=[ 465], 99.90th=[ 553], 99.95th=[ 553], 00:08:35.159 | 99.99th=[ 553] 00:08:35.159 bw ( KiB/s): min= 8192, max= 8192, per=25.73%, avg=8192.00, stdev= 0.00, samples=1 00:08:35.159 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:35.159 lat (usec) : 250=29.14%, 500=66.23%, 750=4.60% 00:08:35.159 lat (msec) : 2=0.03% 00:08:35.159 cpu : usr=1.20%, sys=4.40%, ctx=3219, majf=0, minf=9 00:08:35.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:35.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:35.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:35.159 issued rwts: total=1536,1680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:35.159 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:35.159 job1: (groupid=0, jobs=1): err= 0: pid=65993: Fri Nov 15 12:43:43 2024 00:08:35.159 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:08:35.159 slat (nsec): min=15140, max=62720, avg=20108.30, stdev=5393.67 00:08:35.159 clat (usec): min=225, max=1033, avg=343.94, stdev=74.93 00:08:35.159 lat (usec): min=244, max=1058, avg=364.05, stdev=77.77 00:08:35.159 clat percentiles (usec): 00:08:35.159 | 1.00th=[ 249], 5.00th=[ 277], 10.00th=[ 285], 20.00th=[ 297], 00:08:35.159 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 330], 00:08:35.159 | 70.00th=[ 347], 80.00th=[ 363], 90.00th=[ 482], 95.00th=[ 529], 00:08:35.159 | 99.00th=[ 578], 99.50th=[ 594], 99.90th=[ 701], 99.95th=[ 1037], 00:08:35.159 | 99.99th=[ 1037] 00:08:35.159 write: IOPS=1678, BW=6713KiB/s (6874kB/s)(6720KiB/1001msec); 0 zone resets 00:08:35.159 slat (nsec): min=14365, max=74466, avg=25459.09, stdev=5754.75 00:08:35.159 clat (usec): min=125, max=580, avg=232.75, stdev=45.74 00:08:35.159 lat (usec): min=163, max=599, avg=258.20, stdev=46.00 00:08:35.159 clat percentiles (usec): 00:08:35.159 | 1.00th=[ 151], 5.00th=[ 167], 10.00th=[ 178], 20.00th=[ 190], 00:08:35.159 | 30.00th=[ 202], 40.00th=[ 215], 50.00th=[ 235], 60.00th=[ 249], 00:08:35.159 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 293], 00:08:35.159 | 99.00th=[ 330], 99.50th=[ 449], 99.90th=[ 529], 99.95th=[ 578], 00:08:35.159 | 99.99th=[ 578] 00:08:35.159 bw ( KiB/s): min= 8192, max= 8192, per=25.73%, avg=8192.00, stdev= 0.00, samples=1 00:08:35.159 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:35.159 lat (usec) : 250=32.15%, 500=63.77%, 750=4.04% 00:08:35.159 lat (msec) : 2=0.03% 00:08:35.159 cpu : usr=1.10%, sys=7.10%, ctx=3216, majf=0, minf=3 00:08:35.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:35.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:35.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:35.159 issued rwts: total=1536,1680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:35.159 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:08:35.159 job2: (groupid=0, jobs=1): err= 0: pid=65995: Fri Nov 15 12:43:43 2024 00:08:35.159 read: IOPS=1201, BW=4807KiB/s (4923kB/s)(4812KiB/1001msec) 00:08:35.159 slat (nsec): min=15802, max=90236, avg=30694.50, stdev=13229.10 00:08:35.159 clat (usec): min=181, max=762, avg=402.97, stdev=96.01 00:08:35.159 lat (usec): min=202, max=811, avg=433.67, stdev=105.38 00:08:35.159 clat percentiles (usec): 00:08:35.159 | 1.00th=[ 265], 5.00th=[ 293], 10.00th=[ 306], 20.00th=[ 318], 00:08:35.159 | 30.00th=[ 330], 40.00th=[ 343], 50.00th=[ 367], 60.00th=[ 445], 00:08:35.159 | 70.00th=[ 469], 80.00th=[ 486], 90.00th=[ 519], 95.00th=[ 594], 00:08:35.159 | 99.00th=[ 652], 99.50th=[ 668], 99.90th=[ 725], 99.95th=[ 766], 00:08:35.160 | 99.99th=[ 766] 00:08:35.160 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:08:35.160 slat (usec): min=21, max=108, avg=36.73, stdev=11.61 00:08:35.160 clat (usec): min=98, max=2569, avg=269.12, stdev=122.06 00:08:35.160 lat (usec): min=124, max=2592, avg=305.84, stdev=128.42 00:08:35.160 clat percentiles (usec): 00:08:35.160 | 1.00th=[ 110], 5.00th=[ 119], 10.00th=[ 126], 20.00th=[ 147], 00:08:35.160 | 30.00th=[ 223], 40.00th=[ 255], 50.00th=[ 269], 60.00th=[ 281], 00:08:35.160 | 70.00th=[ 293], 80.00th=[ 318], 90.00th=[ 433], 95.00th=[ 465], 00:08:35.160 | 99.00th=[ 529], 99.50th=[ 578], 99.90th=[ 1270], 99.95th=[ 2573], 00:08:35.160 | 99.99th=[ 2573] 00:08:35.160 bw ( KiB/s): min= 5952, max= 5952, per=18.69%, avg=5952.00, stdev= 0.00, samples=1 00:08:35.160 iops : min= 1488, max= 1488, avg=1488.00, stdev= 0.00, samples=1 00:08:35.160 lat (usec) : 100=0.04%, 250=20.85%, 500=71.85%, 750=7.16%, 1000=0.04% 00:08:35.160 lat (msec) : 2=0.04%, 4=0.04% 00:08:35.160 cpu : usr=2.20%, sys=7.40%, ctx=2739, majf=0, minf=13 00:08:35.160 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:35.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:35.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:35.160 issued rwts: total=1203,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:35.160 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:35.160 job3: (groupid=0, jobs=1): err= 0: pid=66000: Fri Nov 15 12:43:43 2024 00:08:35.160 read: IOPS=2721, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1001msec) 00:08:35.160 slat (nsec): min=12071, max=60265, avg=15885.87, stdev=4909.36 00:08:35.160 clat (usec): min=135, max=583, avg=175.82, stdev=22.69 00:08:35.160 lat (usec): min=148, max=597, avg=191.70, stdev=23.65 00:08:35.160 clat percentiles (usec): 00:08:35.160 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:08:35.160 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 180], 00:08:35.160 | 70.00th=[ 186], 80.00th=[ 194], 90.00th=[ 204], 95.00th=[ 212], 00:08:35.160 | 99.00th=[ 231], 99.50th=[ 241], 99.90th=[ 273], 99.95th=[ 506], 00:08:35.160 | 99.99th=[ 586] 00:08:35.160 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:08:35.160 slat (nsec): min=14799, max=99357, avg=22582.44, stdev=6641.87 00:08:35.160 clat (usec): min=94, max=366, avg=129.60, stdev=18.95 00:08:35.160 lat (usec): min=113, max=385, avg=152.18, stdev=20.39 00:08:35.160 clat percentiles (usec): 00:08:35.160 | 1.00th=[ 99], 5.00th=[ 104], 10.00th=[ 111], 20.00th=[ 116], 00:08:35.160 | 30.00th=[ 119], 40.00th=[ 123], 50.00th=[ 127], 60.00th=[ 133], 00:08:35.160 | 70.00th=[ 137], 80.00th=[ 143], 90.00th=[ 153], 95.00th=[ 
163], 00:08:35.160 | 99.00th=[ 182], 99.50th=[ 188], 99.90th=[ 277], 99.95th=[ 318], 00:08:35.160 | 99.99th=[ 367] 00:08:35.160 bw ( KiB/s): min=12288, max=12288, per=38.59%, avg=12288.00, stdev= 0.00, samples=1 00:08:35.160 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:35.160 lat (usec) : 100=1.00%, 250=98.83%, 500=0.14%, 750=0.03% 00:08:35.160 cpu : usr=3.00%, sys=8.20%, ctx=5796, majf=0, minf=12 00:08:35.160 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:35.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:35.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:35.160 issued rwts: total=2724,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:35.160 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:35.160 00:08:35.160 Run status group 0 (all jobs): 00:08:35.160 READ: bw=27.3MiB/s (28.6MB/s), 4807KiB/s-10.6MiB/s (4923kB/s-11.1MB/s), io=27.3MiB (28.7MB), run=1001-1001msec 00:08:35.160 WRITE: bw=31.1MiB/s (32.6MB/s), 6138KiB/s-12.0MiB/s (6285kB/s-12.6MB/s), io=31.1MiB (32.6MB), run=1001-1001msec 00:08:35.160 00:08:35.160 Disk stats (read/write): 00:08:35.160 nvme0n1: ios=1346/1536, merge=0/0, ticks=432/338, in_queue=770, util=86.97% 00:08:35.160 nvme0n2: ios=1345/1536, merge=0/0, ticks=456/359, in_queue=815, util=88.16% 00:08:35.160 nvme0n3: ios=1024/1195, merge=0/0, ticks=428/382, in_queue=810, util=89.07% 00:08:35.160 nvme0n4: ios=2369/2560, merge=0/0, ticks=446/368, in_queue=814, util=89.73% 00:08:35.160 12:43:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:08:35.160 [global] 00:08:35.160 thread=1 00:08:35.160 invalidate=1 00:08:35.160 rw=randwrite 00:08:35.160 time_based=1 00:08:35.160 runtime=1 00:08:35.160 ioengine=libaio 00:08:35.160 direct=1 00:08:35.160 bs=4096 00:08:35.160 iodepth=1 00:08:35.160 norandommap=0 00:08:35.160 numjobs=1 00:08:35.160 00:08:35.160 verify_dump=1 00:08:35.160 verify_backlog=512 00:08:35.160 verify_state_save=0 00:08:35.160 do_verify=1 00:08:35.160 verify=crc32c-intel 00:08:35.160 [job0] 00:08:35.160 filename=/dev/nvme0n1 00:08:35.160 [job1] 00:08:35.160 filename=/dev/nvme0n2 00:08:35.160 [job2] 00:08:35.160 filename=/dev/nvme0n3 00:08:35.160 [job3] 00:08:35.160 filename=/dev/nvme0n4 00:08:35.160 Could not set queue depth (nvme0n1) 00:08:35.160 Could not set queue depth (nvme0n2) 00:08:35.160 Could not set queue depth (nvme0n3) 00:08:35.160 Could not set queue depth (nvme0n4) 00:08:35.160 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:35.160 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:35.160 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:35.160 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:35.160 fio-3.35 00:08:35.160 Starting 4 threads 00:08:36.538 00:08:36.538 job0: (groupid=0, jobs=1): err= 0: pid=66059: Fri Nov 15 12:43:44 2024 00:08:36.538 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:08:36.538 slat (nsec): min=11490, max=59907, avg=15762.42, stdev=5041.16 00:08:36.538 clat (usec): min=131, max=306, avg=161.16, stdev=15.73 00:08:36.538 lat (usec): min=144, max=321, avg=176.92, stdev=17.50 00:08:36.538 clat percentiles (usec): 
00:08:36.538 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 149], 00:08:36.538 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 163], 00:08:36.538 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 190], 00:08:36.538 | 99.00th=[ 208], 99.50th=[ 217], 99.90th=[ 229], 99.95th=[ 277], 00:08:36.538 | 99.99th=[ 306] 00:08:36.538 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:08:36.538 slat (nsec): min=14650, max=97690, avg=23424.16, stdev=8358.07 00:08:36.538 clat (usec): min=89, max=439, avg=121.14, stdev=14.75 00:08:36.538 lat (usec): min=107, max=458, avg=144.57, stdev=17.90 00:08:36.538 clat percentiles (usec): 00:08:36.538 | 1.00th=[ 96], 5.00th=[ 103], 10.00th=[ 106], 20.00th=[ 111], 00:08:36.538 | 30.00th=[ 114], 40.00th=[ 117], 50.00th=[ 120], 60.00th=[ 123], 00:08:36.538 | 70.00th=[ 126], 80.00th=[ 131], 90.00th=[ 141], 95.00th=[ 147], 00:08:36.538 | 99.00th=[ 163], 99.50th=[ 167], 99.90th=[ 186], 99.95th=[ 208], 00:08:36.538 | 99.99th=[ 441] 00:08:36.538 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:08:36.538 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:36.538 lat (usec) : 100=1.40%, 250=98.55%, 500=0.05% 00:08:36.538 cpu : usr=2.40%, sys=10.00%, ctx=6144, majf=0, minf=11 00:08:36.538 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:36.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.538 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.538 issued rwts: total=3072,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:36.538 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:36.538 job1: (groupid=0, jobs=1): err= 0: pid=66060: Fri Nov 15 12:43:44 2024 00:08:36.538 read: IOPS=1811, BW=7245KiB/s (7419kB/s)(7252KiB/1001msec) 00:08:36.538 slat (nsec): min=12427, max=99711, avg=16652.37, stdev=3341.35 00:08:36.538 clat (usec): min=165, max=751, avg=269.03, stdev=46.24 00:08:36.538 lat (usec): min=182, max=766, avg=285.68, stdev=47.09 00:08:36.538 clat percentiles (usec): 00:08:36.538 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 243], 00:08:36.538 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 262], 00:08:36.538 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 326], 95.00th=[ 351], 00:08:36.538 | 99.00th=[ 482], 99.50th=[ 510], 99.90th=[ 545], 99.95th=[ 750], 00:08:36.539 | 99.99th=[ 750] 00:08:36.539 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:08:36.539 slat (nsec): min=18136, max=87827, avg=27494.59, stdev=7006.88 00:08:36.539 clat (usec): min=95, max=796, avg=203.62, stdev=73.90 00:08:36.539 lat (usec): min=118, max=838, avg=231.11, stdev=76.81 00:08:36.539 clat percentiles (usec): 00:08:36.539 | 1.00th=[ 104], 5.00th=[ 112], 10.00th=[ 118], 20.00th=[ 137], 00:08:36.539 | 30.00th=[ 172], 40.00th=[ 182], 50.00th=[ 192], 60.00th=[ 202], 00:08:36.539 | 70.00th=[ 215], 80.00th=[ 241], 90.00th=[ 318], 95.00th=[ 367], 00:08:36.539 | 99.00th=[ 404], 99.50th=[ 441], 99.90th=[ 545], 99.95th=[ 603], 00:08:36.539 | 99.99th=[ 799] 00:08:36.539 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:08:36.539 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:36.539 lat (usec) : 100=0.13%, 250=60.11%, 500=39.34%, 750=0.36%, 1000=0.05% 00:08:36.539 cpu : usr=2.30%, sys=6.60%, ctx=3864, majf=0, minf=9 00:08:36.539 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:08:36.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.539 issued rwts: total=1813,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:36.539 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:36.539 job2: (groupid=0, jobs=1): err= 0: pid=66061: Fri Nov 15 12:43:44 2024 00:08:36.539 read: IOPS=1948, BW=7792KiB/s (7979kB/s)(7800KiB/1001msec) 00:08:36.539 slat (nsec): min=12353, max=53080, avg=15769.47, stdev=4514.10 00:08:36.539 clat (usec): min=153, max=1796, avg=281.03, stdev=75.52 00:08:36.539 lat (usec): min=166, max=1820, avg=296.80, stdev=77.90 00:08:36.539 clat percentiles (usec): 00:08:36.539 | 1.00th=[ 221], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 243], 00:08:36.539 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 265], 00:08:36.539 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 351], 95.00th=[ 465], 00:08:36.539 | 99.00th=[ 523], 99.50th=[ 545], 99.90th=[ 652], 99.95th=[ 1795], 00:08:36.539 | 99.99th=[ 1795] 00:08:36.539 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:08:36.539 slat (nsec): min=13873, max=75885, avg=21523.21, stdev=5330.42 00:08:36.539 clat (usec): min=99, max=1536, avg=180.61, stdev=54.34 00:08:36.539 lat (usec): min=117, max=1554, avg=202.13, stdev=55.93 00:08:36.539 clat percentiles (usec): 00:08:36.539 | 1.00th=[ 105], 5.00th=[ 113], 10.00th=[ 119], 20.00th=[ 129], 00:08:36.539 | 30.00th=[ 169], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 194], 00:08:36.539 | 70.00th=[ 202], 80.00th=[ 212], 90.00th=[ 225], 95.00th=[ 239], 00:08:36.539 | 99.00th=[ 289], 99.50th=[ 347], 99.90th=[ 474], 99.95th=[ 848], 00:08:36.539 | 99.99th=[ 1532] 00:08:36.539 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:08:36.539 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:36.539 lat (usec) : 100=0.03%, 250=65.88%, 500=32.89%, 750=1.13%, 1000=0.03% 00:08:36.539 lat (msec) : 2=0.05% 00:08:36.539 cpu : usr=1.70%, sys=5.90%, ctx=3998, majf=0, minf=17 00:08:36.539 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:36.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.539 issued rwts: total=1950,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:36.539 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:36.539 job3: (groupid=0, jobs=1): err= 0: pid=66062: Fri Nov 15 12:43:44 2024 00:08:36.539 read: IOPS=2575, BW=10.1MiB/s (10.5MB/s)(10.1MiB/1001msec) 00:08:36.539 slat (nsec): min=11408, max=59541, avg=15281.46, stdev=4440.55 00:08:36.539 clat (usec): min=146, max=261, avg=181.64, stdev=16.70 00:08:36.539 lat (usec): min=159, max=277, avg=196.92, stdev=17.79 00:08:36.539 clat percentiles (usec): 00:08:36.539 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:08:36.539 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:08:36.539 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 206], 95.00th=[ 215], 00:08:36.539 | 99.00th=[ 229], 99.50th=[ 241], 99.90th=[ 253], 99.95th=[ 253], 00:08:36.539 | 99.99th=[ 262] 00:08:36.539 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:08:36.539 slat (nsec): min=14144, max=97893, avg=22121.73, stdev=7816.41 00:08:36.539 clat (usec): min=101, max=1953, avg=134.90, stdev=36.60 00:08:36.539 lat (usec): min=118, 
max=1974, avg=157.02, stdev=37.89 00:08:36.539 clat percentiles (usec): 00:08:36.539 | 1.00th=[ 112], 5.00th=[ 117], 10.00th=[ 120], 20.00th=[ 124], 00:08:36.539 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 135], 00:08:36.539 | 70.00th=[ 139], 80.00th=[ 145], 90.00th=[ 155], 95.00th=[ 163], 00:08:36.539 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 219], 99.95th=[ 570], 00:08:36.539 | 99.99th=[ 1958] 00:08:36.539 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:08:36.539 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:36.539 lat (usec) : 250=99.89%, 500=0.07%, 750=0.02% 00:08:36.539 lat (msec) : 2=0.02% 00:08:36.539 cpu : usr=2.20%, sys=8.60%, ctx=5654, majf=0, minf=11 00:08:36.539 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:36.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.539 issued rwts: total=2578,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:36.539 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:36.539 00:08:36.539 Run status group 0 (all jobs): 00:08:36.539 READ: bw=36.7MiB/s (38.5MB/s), 7245KiB/s-12.0MiB/s (7419kB/s-12.6MB/s), io=36.8MiB (38.6MB), run=1001-1001msec 00:08:36.539 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:08:36.539 00:08:36.539 Disk stats (read/write): 00:08:36.539 nvme0n1: ios=2610/2747, merge=0/0, ticks=452/360, in_queue=812, util=87.58% 00:08:36.539 nvme0n2: ios=1585/1721, merge=0/0, ticks=466/393, in_queue=859, util=88.77% 00:08:36.539 nvme0n3: ios=1542/2048, merge=0/0, ticks=437/395, in_queue=832, util=89.15% 00:08:36.539 nvme0n4: ios=2282/2560, merge=0/0, ticks=423/373, in_queue=796, util=89.71% 00:08:36.539 12:43:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:08:36.539 [global] 00:08:36.539 thread=1 00:08:36.539 invalidate=1 00:08:36.539 rw=write 00:08:36.539 time_based=1 00:08:36.539 runtime=1 00:08:36.539 ioengine=libaio 00:08:36.539 direct=1 00:08:36.539 bs=4096 00:08:36.539 iodepth=128 00:08:36.539 norandommap=0 00:08:36.539 numjobs=1 00:08:36.539 00:08:36.539 verify_dump=1 00:08:36.539 verify_backlog=512 00:08:36.539 verify_state_save=0 00:08:36.539 do_verify=1 00:08:36.539 verify=crc32c-intel 00:08:36.539 [job0] 00:08:36.539 filename=/dev/nvme0n1 00:08:36.539 [job1] 00:08:36.539 filename=/dev/nvme0n2 00:08:36.539 [job2] 00:08:36.539 filename=/dev/nvme0n3 00:08:36.539 [job3] 00:08:36.539 filename=/dev/nvme0n4 00:08:36.539 Could not set queue depth (nvme0n1) 00:08:36.539 Could not set queue depth (nvme0n2) 00:08:36.539 Could not set queue depth (nvme0n3) 00:08:36.539 Could not set queue depth (nvme0n4) 00:08:36.539 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:36.539 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:36.539 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:36.539 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:36.539 fio-3.35 00:08:36.539 Starting 4 threads 00:08:37.919 00:08:37.919 job0: (groupid=0, jobs=1): err= 0: pid=66115: Fri Nov 15 12:43:46 2024 
00:08:37.919 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:08:37.919 slat (usec): min=5, max=3148, avg=84.00, stdev=325.96 00:08:37.919 clat (usec): min=9099, max=14773, avg=11389.42, stdev=711.99 00:08:37.919 lat (usec): min=9118, max=14828, avg=11473.42, stdev=758.52 00:08:37.919 clat percentiles (usec): 00:08:37.919 | 1.00th=[ 9634], 5.00th=[10290], 10.00th=[10814], 20.00th=[10945], 00:08:37.919 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11338], 00:08:37.919 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12387], 95.00th=[12911], 00:08:37.919 | 99.00th=[13435], 99.50th=[13435], 99.90th=[13698], 99.95th=[14222], 00:08:37.919 | 99.99th=[14746] 00:08:37.919 write: IOPS=5792, BW=22.6MiB/s (23.7MB/s)(22.6MiB/1001msec); 0 zone resets 00:08:37.919 slat (usec): min=10, max=4079, avg=82.92, stdev=388.33 00:08:37.919 clat (usec): min=297, max=14859, avg=10765.45, stdev=1020.27 00:08:37.919 lat (usec): min=3196, max=14891, avg=10848.37, stdev=1081.69 00:08:37.919 clat percentiles (usec): 00:08:37.919 | 1.00th=[ 6783], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10421], 00:08:37.919 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[10814], 00:08:37.919 | 70.00th=[10945], 80.00th=[11076], 90.00th=[11600], 95.00th=[12518], 00:08:37.919 | 99.00th=[13435], 99.50th=[13698], 99.90th=[14353], 99.95th=[14353], 00:08:37.919 | 99.99th=[14877] 00:08:37.919 bw ( KiB/s): min=24095, max=24095, per=36.79%, avg=24095.00, stdev= 0.00, samples=1 00:08:37.919 iops : min= 6023, max= 6023, avg=6023.00, stdev= 0.00, samples=1 00:08:37.919 lat (usec) : 500=0.01% 00:08:37.919 lat (msec) : 4=0.35%, 10=4.93%, 20=94.72% 00:08:37.919 cpu : usr=6.30%, sys=14.50%, ctx=409, majf=0, minf=9 00:08:37.919 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:08:37.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:37.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:37.919 issued rwts: total=5632,5798,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:37.919 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:37.919 job1: (groupid=0, jobs=1): err= 0: pid=66116: Fri Nov 15 12:43:46 2024 00:08:37.919 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:08:37.919 slat (usec): min=4, max=6202, avg=187.47, stdev=961.88 00:08:37.919 clat (usec): min=17721, max=28407, avg=24494.38, stdev=1242.81 00:08:37.919 lat (usec): min=22651, max=28423, avg=24681.85, stdev=798.20 00:08:37.919 clat percentiles (usec): 00:08:37.919 | 1.00th=[19006], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:08:37.919 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24511], 60.00th=[24773], 00:08:37.919 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[25560], 00:08:37.919 | 99.00th=[28443], 99.50th=[28443], 99.90th=[28443], 99.95th=[28443], 00:08:37.919 | 99.99th=[28443] 00:08:37.919 write: IOPS=2739, BW=10.7MiB/s (11.2MB/s)(10.8MiB/1005msec); 0 zone resets 00:08:37.919 slat (usec): min=9, max=8559, avg=181.41, stdev=888.15 00:08:37.919 clat (usec): min=2287, max=27012, avg=23061.98, stdev=2363.84 00:08:37.919 lat (usec): min=7275, max=27037, avg=23243.39, stdev=2191.09 00:08:37.919 clat percentiles (usec): 00:08:37.919 | 1.00th=[ 7898], 5.00th=[18744], 10.00th=[22676], 20.00th=[22938], 00:08:37.919 | 30.00th=[23200], 40.00th=[23200], 50.00th=[23462], 60.00th=[23462], 00:08:37.919 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[24773], 00:08:37.919 | 99.00th=[26870], 99.50th=[26870], 
99.90th=[26870], 99.95th=[27132], 00:08:37.919 | 99.99th=[27132] 00:08:37.919 bw ( KiB/s): min= 8968, max=12040, per=16.04%, avg=10504.00, stdev=2172.23, samples=2 00:08:37.919 iops : min= 2242, max= 3010, avg=2626.00, stdev=543.06, samples=2 00:08:37.919 lat (msec) : 4=0.02%, 10=0.60%, 20=4.22%, 50=95.16% 00:08:37.919 cpu : usr=1.89%, sys=9.06%, ctx=168, majf=0, minf=15 00:08:37.919 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:08:37.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:37.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:37.919 issued rwts: total=2560,2753,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:37.919 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:37.919 job2: (groupid=0, jobs=1): err= 0: pid=66117: Fri Nov 15 12:43:46 2024 00:08:37.919 read: IOPS=4836, BW=18.9MiB/s (19.8MB/s)(18.9MiB/1001msec) 00:08:37.919 slat (usec): min=5, max=4493, avg=96.88, stdev=382.57 00:08:37.919 clat (usec): min=483, max=18584, avg=12690.20, stdev=1443.88 00:08:37.919 lat (usec): min=2249, max=18600, avg=12787.08, stdev=1473.60 00:08:37.919 clat percentiles (usec): 00:08:37.919 | 1.00th=[ 6718], 5.00th=[10814], 10.00th=[11600], 20.00th=[12256], 00:08:37.919 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:08:37.919 | 70.00th=[12911], 80.00th=[13173], 90.00th=[14222], 95.00th=[14746], 00:08:37.919 | 99.00th=[16450], 99.50th=[16712], 99.90th=[18482], 99.95th=[18482], 00:08:37.919 | 99.99th=[18482] 00:08:37.919 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:08:37.919 slat (usec): min=9, max=5804, avg=95.23, stdev=415.22 00:08:37.919 clat (usec): min=9033, max=21389, avg=12705.42, stdev=1441.54 00:08:37.919 lat (usec): min=9054, max=21406, avg=12800.65, stdev=1488.15 00:08:37.919 clat percentiles (usec): 00:08:37.919 | 1.00th=[10159], 5.00th=[11207], 10.00th=[11600], 20.00th=[11731], 00:08:37.919 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:08:37.919 | 70.00th=[12911], 80.00th=[13566], 90.00th=[14877], 95.00th=[16188], 00:08:37.919 | 99.00th=[17695], 99.50th=[17957], 99.90th=[19268], 99.95th=[20317], 00:08:37.919 | 99.99th=[21365] 00:08:37.919 bw ( KiB/s): min=20480, max=20480, per=31.27%, avg=20480.00, stdev= 0.00, samples=1 00:08:37.919 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:08:37.919 lat (usec) : 500=0.01% 00:08:37.919 lat (msec) : 4=0.22%, 10=1.20%, 20=98.53%, 50=0.03% 00:08:37.919 cpu : usr=4.70%, sys=14.90%, ctx=470, majf=0, minf=9 00:08:37.919 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:08:37.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:37.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:37.919 issued rwts: total=4841,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:37.919 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:37.919 job3: (groupid=0, jobs=1): err= 0: pid=66118: Fri Nov 15 12:43:46 2024 00:08:37.919 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:08:37.919 slat (usec): min=8, max=6161, avg=187.43, stdev=959.96 00:08:37.919 clat (usec): min=17720, max=27418, avg=24545.74, stdev=1207.72 00:08:37.919 lat (usec): min=22655, max=27433, avg=24733.17, stdev=742.31 00:08:37.919 clat percentiles (usec): 00:08:37.919 | 1.00th=[19006], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:08:37.919 | 30.00th=[24249], 40.00th=[24511], 
50.00th=[24773], 60.00th=[24773], 00:08:37.919 | 70.00th=[25035], 80.00th=[25035], 90.00th=[25560], 95.00th=[25822], 00:08:37.920 | 99.00th=[27395], 99.50th=[27395], 99.90th=[27395], 99.95th=[27395], 00:08:37.920 | 99.99th=[27395] 00:08:37.920 write: IOPS=2776, BW=10.8MiB/s (11.4MB/s)(10.9MiB/1003msec); 0 zone resets 00:08:37.920 slat (usec): min=8, max=6891, avg=180.60, stdev=886.91 00:08:37.920 clat (usec): min=172, max=26716, avg=22776.38, stdev=3065.34 00:08:37.920 lat (usec): min=3077, max=26763, avg=22956.99, stdev=2942.07 00:08:37.920 clat percentiles (usec): 00:08:37.920 | 1.00th=[ 3687], 5.00th=[18220], 10.00th=[22152], 20.00th=[22676], 00:08:37.920 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23200], 60.00th=[23462], 00:08:37.920 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[24773], 00:08:37.920 | 99.00th=[26346], 99.50th=[26608], 99.90th=[26608], 99.95th=[26608], 00:08:37.920 | 99.99th=[26608] 00:08:37.920 bw ( KiB/s): min= 9216, max=12040, per=16.23%, avg=10628.00, stdev=1996.87, samples=2 00:08:37.920 iops : min= 2304, max= 3010, avg=2657.00, stdev=499.22, samples=2 00:08:37.920 lat (usec) : 250=0.02% 00:08:37.920 lat (msec) : 4=0.60%, 10=0.60%, 20=3.89%, 50=94.89% 00:08:37.920 cpu : usr=2.59%, sys=6.89%, ctx=168, majf=0, minf=19 00:08:37.920 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:08:37.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:37.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:37.920 issued rwts: total=2560,2785,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:37.920 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:37.920 00:08:37.920 Run status group 0 (all jobs): 00:08:37.920 READ: bw=60.6MiB/s (63.6MB/s), 9.95MiB/s-22.0MiB/s (10.4MB/s-23.0MB/s), io=60.9MiB (63.9MB), run=1001-1005msec 00:08:37.920 WRITE: bw=64.0MiB/s (67.1MB/s), 10.7MiB/s-22.6MiB/s (11.2MB/s-23.7MB/s), io=64.3MiB (67.4MB), run=1001-1005msec 00:08:37.920 00:08:37.920 Disk stats (read/write): 00:08:37.920 nvme0n1: ios=4806/5120, merge=0/0, ticks=16736/15114, in_queue=31850, util=88.47% 00:08:37.920 nvme0n2: ios=2097/2528, merge=0/0, ticks=11792/13473, in_queue=25265, util=89.28% 00:08:37.920 nvme0n3: ios=4096/4480, merge=0/0, ticks=16651/15767, in_queue=32418, util=89.09% 00:08:37.920 nvme0n4: ios=2048/2528, merge=0/0, ticks=10568/11428, in_queue=21996, util=89.56% 00:08:37.920 12:43:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:08:37.920 [global] 00:08:37.920 thread=1 00:08:37.920 invalidate=1 00:08:37.920 rw=randwrite 00:08:37.920 time_based=1 00:08:37.920 runtime=1 00:08:37.920 ioengine=libaio 00:08:37.920 direct=1 00:08:37.920 bs=4096 00:08:37.920 iodepth=128 00:08:37.920 norandommap=0 00:08:37.920 numjobs=1 00:08:37.920 00:08:37.920 verify_dump=1 00:08:37.920 verify_backlog=512 00:08:37.920 verify_state_save=0 00:08:37.920 do_verify=1 00:08:37.920 verify=crc32c-intel 00:08:37.920 [job0] 00:08:37.920 filename=/dev/nvme0n1 00:08:37.920 [job1] 00:08:37.920 filename=/dev/nvme0n2 00:08:37.920 [job2] 00:08:37.920 filename=/dev/nvme0n3 00:08:37.920 [job3] 00:08:37.920 filename=/dev/nvme0n4 00:08:37.920 Could not set queue depth (nvme0n1) 00:08:37.920 Could not set queue depth (nvme0n2) 00:08:37.920 Could not set queue depth (nvme0n3) 00:08:37.920 Could not set queue depth (nvme0n4) 00:08:37.920 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:37.920 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:37.920 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:37.920 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:37.920 fio-3.35 00:08:37.920 Starting 4 threads 00:08:39.300 00:08:39.300 job0: (groupid=0, jobs=1): err= 0: pid=66180: Fri Nov 15 12:43:47 2024 00:08:39.300 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:08:39.300 slat (usec): min=5, max=5771, avg=90.77, stdev=562.59 00:08:39.300 clat (usec): min=7603, max=20841, avg=12781.09, stdev=1534.28 00:08:39.300 lat (usec): min=7618, max=24845, avg=12871.86, stdev=1572.09 00:08:39.300 clat percentiles (usec): 00:08:39.300 | 1.00th=[ 8455], 5.00th=[10814], 10.00th=[11338], 20.00th=[11600], 00:08:39.300 | 30.00th=[12518], 40.00th=[12780], 50.00th=[12911], 60.00th=[13173], 00:08:39.300 | 70.00th=[13304], 80.00th=[13435], 90.00th=[14091], 95.00th=[14484], 00:08:39.300 | 99.00th=[19792], 99.50th=[20317], 99.90th=[20841], 99.95th=[20841], 00:08:39.300 | 99.99th=[20841] 00:08:39.300 write: IOPS=5348, BW=20.9MiB/s (21.9MB/s)(21.0MiB/1005msec); 0 zone resets 00:08:39.300 slat (usec): min=10, max=7713, avg=91.84, stdev=533.62 00:08:39.300 clat (usec): min=2163, max=16045, avg=11515.34, stdev=1296.26 00:08:39.300 lat (usec): min=6466, max=16298, avg=11607.18, stdev=1208.58 00:08:39.300 clat percentiles (usec): 00:08:39.300 | 1.00th=[ 7111], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10552], 00:08:39.300 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11600], 60.00th=[12125], 00:08:39.300 | 70.00th=[12256], 80.00th=[12518], 90.00th=[12780], 95.00th=[12911], 00:08:39.300 | 99.00th=[14877], 99.50th=[15401], 99.90th=[16057], 99.95th=[16057], 00:08:39.300 | 99.99th=[16057] 00:08:39.300 bw ( KiB/s): min=19968, max=22008, per=28.22%, avg=20988.00, stdev=1442.50, samples=2 00:08:39.300 iops : min= 4992, max= 5502, avg=5247.00, stdev=360.62, samples=2 00:08:39.300 lat (msec) : 4=0.01%, 10=6.95%, 20=92.59%, 50=0.46% 00:08:39.300 cpu : usr=4.58%, sys=14.84%, ctx=217, majf=0, minf=11 00:08:39.300 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:08:39.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:39.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:39.300 issued rwts: total=5120,5375,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:39.300 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:39.300 job1: (groupid=0, jobs=1): err= 0: pid=66181: Fri Nov 15 12:43:47 2024 00:08:39.300 read: IOPS=4491, BW=17.5MiB/s (18.4MB/s)(17.6MiB/1002msec) 00:08:39.300 slat (usec): min=4, max=6104, avg=111.00, stdev=588.03 00:08:39.300 clat (usec): min=1344, max=32156, avg=14278.42, stdev=3887.47 00:08:39.300 lat (usec): min=2589, max=32178, avg=14389.42, stdev=3916.63 00:08:39.300 clat percentiles (usec): 00:08:39.300 | 1.00th=[ 8356], 5.00th=[11207], 10.00th=[12387], 20.00th=[12649], 00:08:39.300 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13173], 60.00th=[13304], 00:08:39.300 | 70.00th=[13566], 80.00th=[14222], 90.00th=[23200], 95.00th=[23987], 00:08:39.300 | 99.00th=[25560], 99.50th=[26084], 99.90th=[28967], 99.95th=[29230], 00:08:39.300 | 99.99th=[32113] 00:08:39.300 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:08:39.300 slat 
(usec): min=9, max=9978, avg=100.71, stdev=601.37 00:08:39.300 clat (usec): min=3755, max=32712, avg=13423.92, stdev=3949.18 00:08:39.300 lat (usec): min=3781, max=36980, avg=13524.63, stdev=3970.21 00:08:39.300 clat percentiles (usec): 00:08:39.300 | 1.00th=[ 8160], 5.00th=[10421], 10.00th=[10814], 20.00th=[11338], 00:08:39.300 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12387], 00:08:39.300 | 70.00th=[12649], 80.00th=[12911], 90.00th=[22152], 95.00th=[22676], 00:08:39.300 | 99.00th=[27132], 99.50th=[27132], 99.90th=[27395], 99.95th=[28181], 00:08:39.300 | 99.99th=[32637] 00:08:39.300 bw ( KiB/s): min=16384, max=16384, per=22.03%, avg=16384.00, stdev= 0.00, samples=1 00:08:39.300 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:08:39.300 lat (msec) : 2=0.01%, 4=0.21%, 10=4.03%, 20=82.89%, 50=12.86% 00:08:39.300 cpu : usr=3.20%, sys=13.49%, ctx=292, majf=0, minf=15 00:08:39.300 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:08:39.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:39.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:39.300 issued rwts: total=4500,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:39.300 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:39.300 job2: (groupid=0, jobs=1): err= 0: pid=66182: Fri Nov 15 12:43:47 2024 00:08:39.300 read: IOPS=4258, BW=16.6MiB/s (17.4MB/s)(16.7MiB/1002msec) 00:08:39.300 slat (usec): min=7, max=3775, avg=111.65, stdev=530.79 00:08:39.300 clat (usec): min=1274, max=16858, avg=14629.63, stdev=1534.49 00:08:39.300 lat (usec): min=1288, max=16870, avg=14741.28, stdev=1446.93 00:08:39.300 clat percentiles (usec): 00:08:39.300 | 1.00th=[ 5604], 5.00th=[12780], 10.00th=[13173], 20.00th=[13566], 00:08:39.300 | 30.00th=[14746], 40.00th=[15008], 50.00th=[15139], 60.00th=[15270], 00:08:39.300 | 70.00th=[15401], 80.00th=[15533], 90.00th=[15664], 95.00th=[15664], 00:08:39.300 | 99.00th=[16057], 99.50th=[16057], 99.90th=[16909], 99.95th=[16909], 00:08:39.300 | 99.99th=[16909] 00:08:39.300 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:08:39.300 slat (usec): min=12, max=3282, avg=105.12, stdev=455.39 00:08:39.300 clat (usec): min=9620, max=16191, avg=13907.02, stdev=1052.36 00:08:39.300 lat (usec): min=10611, max=16240, avg=14012.14, stdev=952.62 00:08:39.300 clat percentiles (usec): 00:08:39.300 | 1.00th=[11338], 5.00th=[11994], 10.00th=[12256], 20.00th=[12780], 00:08:39.300 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14222], 60.00th=[14353], 00:08:39.300 | 70.00th=[14615], 80.00th=[14615], 90.00th=[14877], 95.00th=[15139], 00:08:39.300 | 99.00th=[15795], 99.50th=[16057], 99.90th=[16188], 99.95th=[16188], 00:08:39.300 | 99.99th=[16188] 00:08:39.300 bw ( KiB/s): min=19464, max=19464, per=26.17%, avg=19464.00, stdev= 0.00, samples=1 00:08:39.300 iops : min= 4866, max= 4866, avg=4866.00, stdev= 0.00, samples=1 00:08:39.300 lat (msec) : 2=0.12%, 10=0.86%, 20=99.02% 00:08:39.300 cpu : usr=4.10%, sys=13.89%, ctx=278, majf=0, minf=9 00:08:39.300 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:08:39.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:39.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:39.300 issued rwts: total=4267,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:39.300 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:39.300 job3: (groupid=0, jobs=1): err= 0: 
pid=66183: Fri Nov 15 12:43:47 2024 00:08:39.300 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(15.9MiB/1004msec) 00:08:39.300 slat (usec): min=7, max=8728, avg=122.56, stdev=557.16 00:08:39.300 clat (usec): min=875, max=31863, avg=15750.74, stdev=3556.54 00:08:39.300 lat (usec): min=2979, max=31894, avg=15873.30, stdev=3578.29 00:08:39.300 clat percentiles (usec): 00:08:39.301 | 1.00th=[ 5800], 5.00th=[11863], 10.00th=[13173], 20.00th=[14353], 00:08:39.301 | 30.00th=[14615], 40.00th=[14746], 50.00th=[14877], 60.00th=[15008], 00:08:39.301 | 70.00th=[15270], 80.00th=[16057], 90.00th=[23462], 95.00th=[23987], 00:08:39.301 | 99.00th=[25822], 99.50th=[25822], 99.90th=[31327], 99.95th=[31327], 00:08:39.301 | 99.99th=[31851] 00:08:39.301 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:08:39.301 slat (usec): min=11, max=10349, avg=114.18, stdev=666.66 00:08:39.301 clat (usec): min=6989, max=31237, avg=15215.67, stdev=3478.50 00:08:39.301 lat (usec): min=7316, max=31330, avg=15329.85, stdev=3518.77 00:08:39.301 clat percentiles (usec): 00:08:39.301 | 1.00th=[ 9634], 5.00th=[11863], 10.00th=[13042], 20.00th=[13304], 00:08:39.301 | 30.00th=[13566], 40.00th=[13698], 50.00th=[13960], 60.00th=[14091], 00:08:39.301 | 70.00th=[14484], 80.00th=[17171], 90.00th=[21890], 95.00th=[22676], 00:08:39.301 | 99.00th=[25035], 99.50th=[25035], 99.90th=[27657], 99.95th=[30540], 00:08:39.301 | 99.99th=[31327] 00:08:39.301 bw ( KiB/s): min=16384, max=16384, per=22.03%, avg=16384.00, stdev= 0.00, samples=2 00:08:39.301 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:08:39.301 lat (usec) : 1000=0.01% 00:08:39.301 lat (msec) : 4=0.27%, 10=1.06%, 20=85.06%, 50=13.60% 00:08:39.301 cpu : usr=4.49%, sys=11.17%, ctx=313, majf=0, minf=15 00:08:39.301 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:08:39.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:39.301 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:39.301 issued rwts: total=4076,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:39.301 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:39.301 00:08:39.301 Run status group 0 (all jobs): 00:08:39.301 READ: bw=69.8MiB/s (73.2MB/s), 15.9MiB/s-19.9MiB/s (16.6MB/s-20.9MB/s), io=70.2MiB (73.6MB), run=1002-1005msec 00:08:39.301 WRITE: bw=72.6MiB/s (76.2MB/s), 15.9MiB/s-20.9MiB/s (16.7MB/s-21.9MB/s), io=73.0MiB (76.5MB), run=1002-1005msec 00:08:39.301 00:08:39.301 Disk stats (read/write): 00:08:39.301 nvme0n1: ios=4398/4608, merge=0/0, ticks=51857/48875, in_queue=100732, util=87.58% 00:08:39.301 nvme0n2: ios=3617/3977, merge=0/0, ticks=44212/43042, in_queue=87254, util=87.42% 00:08:39.301 nvme0n3: ios=3584/4096, merge=0/0, ticks=11691/12318, in_queue=24009, util=89.25% 00:08:39.301 nvme0n4: ios=3286/3584, merge=0/0, ticks=25852/23647, in_queue=49499, util=89.59% 00:08:39.301 12:43:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:08:39.301 12:43:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66196 00:08:39.301 12:43:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:08:39.301 12:43:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:08:39.301 [global] 00:08:39.301 thread=1 00:08:39.301 invalidate=1 00:08:39.301 rw=read 00:08:39.301 time_based=1 00:08:39.301 runtime=10 00:08:39.301 
ioengine=libaio 00:08:39.301 direct=1 00:08:39.301 bs=4096 00:08:39.301 iodepth=1 00:08:39.301 norandommap=1 00:08:39.301 numjobs=1 00:08:39.301 00:08:39.301 [job0] 00:08:39.301 filename=/dev/nvme0n1 00:08:39.301 [job1] 00:08:39.301 filename=/dev/nvme0n2 00:08:39.301 [job2] 00:08:39.301 filename=/dev/nvme0n3 00:08:39.301 [job3] 00:08:39.301 filename=/dev/nvme0n4 00:08:39.301 Could not set queue depth (nvme0n1) 00:08:39.301 Could not set queue depth (nvme0n2) 00:08:39.301 Could not set queue depth (nvme0n3) 00:08:39.301 Could not set queue depth (nvme0n4) 00:08:39.301 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:39.301 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:39.301 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:39.301 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:39.301 fio-3.35 00:08:39.301 Starting 4 threads 00:08:42.589 12:43:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:08:42.589 fio: pid=66243, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:42.589 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=45428736, buflen=4096 00:08:42.589 12:43:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:08:42.848 fio: pid=66242, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:42.848 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=51171328, buflen=4096 00:08:42.848 12:43:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:42.848 12:43:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:08:43.107 fio: pid=66236, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:43.107 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=57561088, buflen=4096 00:08:43.107 12:43:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:43.107 12:43:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:08:43.366 fio: pid=66237, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:43.366 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=62275584, buflen=4096 00:08:43.366 12:43:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:43.366 12:43:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:08:43.366 00:08:43.366 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66236: Fri Nov 15 12:43:51 2024 00:08:43.366 read: IOPS=4002, BW=15.6MiB/s (16.4MB/s)(54.9MiB/3511msec) 00:08:43.366 slat (usec): min=11, max=10354, avg=16.81, stdev=140.03 00:08:43.366 clat (usec): min=124, max=2361, 
avg=231.58, stdev=53.30 00:08:43.366 lat (usec): min=136, max=10523, avg=248.40, stdev=149.07 00:08:43.366 clat percentiles (usec): 00:08:43.366 | 1.00th=[ 135], 5.00th=[ 145], 10.00th=[ 153], 20.00th=[ 188], 00:08:43.366 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 247], 00:08:43.366 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 293], 00:08:43.366 | 99.00th=[ 318], 99.50th=[ 338], 99.90th=[ 420], 99.95th=[ 570], 00:08:43.366 | 99.99th=[ 2057] 00:08:43.366 bw ( KiB/s): min=14528, max=15600, per=26.87%, avg=15044.00, stdev=371.90, samples=6 00:08:43.366 iops : min= 3632, max= 3900, avg=3761.00, stdev=92.98, samples=6 00:08:43.366 lat (usec) : 250=64.24%, 500=35.68%, 750=0.03%, 1000=0.01% 00:08:43.366 lat (msec) : 2=0.01%, 4=0.01% 00:08:43.366 cpu : usr=0.97%, sys=5.41%, ctx=14059, majf=0, minf=1 00:08:43.366 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:43.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.366 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.366 issued rwts: total=14054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:43.366 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:43.366 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66237: Fri Nov 15 12:43:51 2024 00:08:43.366 read: IOPS=4027, BW=15.7MiB/s (16.5MB/s)(59.4MiB/3775msec) 00:08:43.366 slat (usec): min=7, max=11639, avg=15.19, stdev=173.66 00:08:43.366 clat (usec): min=37, max=15549, avg=231.94, stdev=136.68 00:08:43.366 lat (usec): min=133, max=15561, avg=247.12, stdev=221.32 00:08:43.366 clat percentiles (usec): 00:08:43.366 | 1.00th=[ 131], 5.00th=[ 143], 10.00th=[ 155], 20.00th=[ 217], 00:08:43.366 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 243], 00:08:43.366 | 70.00th=[ 251], 80.00th=[ 260], 90.00th=[ 273], 95.00th=[ 285], 00:08:43.366 | 99.00th=[ 310], 99.50th=[ 322], 99.90th=[ 469], 99.95th=[ 725], 00:08:43.366 | 99.99th=[ 3097] 00:08:43.366 bw ( KiB/s): min=14792, max=17954, per=28.07%, avg=15715.71, stdev=1041.56, samples=7 00:08:43.366 iops : min= 3698, max= 4488, avg=3928.86, stdev=260.21, samples=7 00:08:43.366 lat (usec) : 50=0.01%, 250=69.40%, 500=30.50%, 750=0.05% 00:08:43.366 lat (msec) : 2=0.03%, 4=0.01%, 20=0.01% 00:08:43.366 cpu : usr=0.82%, sys=4.61%, ctx=15213, majf=0, minf=2 00:08:43.366 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:43.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.366 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.366 issued rwts: total=15205,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:43.366 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:43.366 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66242: Fri Nov 15 12:43:51 2024 00:08:43.366 read: IOPS=3827, BW=15.0MiB/s (15.7MB/s)(48.8MiB/3264msec) 00:08:43.366 slat (usec): min=7, max=9712, avg=13.97, stdev=109.07 00:08:43.366 clat (usec): min=2, max=2968, avg=246.12, stdev=42.22 00:08:43.366 lat (usec): min=152, max=10010, avg=260.08, stdev=117.47 00:08:43.366 clat percentiles (usec): 00:08:43.366 | 1.00th=[ 208], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 229], 00:08:43.366 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 247], 00:08:43.366 | 70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 285], 00:08:43.366 | 99.00th=[ 
310], 99.50th=[ 322], 99.90th=[ 412], 99.95th=[ 668], 00:08:43.366 | 99.99th=[ 1909] 00:08:43.366 bw ( KiB/s): min=14992, max=15696, per=27.51%, avg=15405.33, stdev=271.23, samples=6 00:08:43.366 iops : min= 3748, max= 3924, avg=3851.33, stdev=67.81, samples=6 00:08:43.366 lat (usec) : 4=0.01%, 250=64.90%, 500=35.02%, 750=0.03% 00:08:43.366 lat (msec) : 2=0.03%, 4=0.01% 00:08:43.367 cpu : usr=1.13%, sys=4.32%, ctx=12501, majf=0, minf=2 00:08:43.367 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:43.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.367 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.367 issued rwts: total=12494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:43.367 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:43.367 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66243: Fri Nov 15 12:43:51 2024 00:08:43.367 read: IOPS=3751, BW=14.7MiB/s (15.4MB/s)(43.3MiB/2957msec) 00:08:43.367 slat (nsec): min=11910, max=94399, avg=14957.52, stdev=4444.00 00:08:43.367 clat (usec): min=154, max=2114, avg=250.05, stdev=36.57 00:08:43.367 lat (usec): min=166, max=2128, avg=265.01, stdev=37.06 00:08:43.367 clat percentiles (usec): 00:08:43.367 | 1.00th=[ 215], 5.00th=[ 221], 10.00th=[ 223], 20.00th=[ 229], 00:08:43.367 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 253], 00:08:43.367 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 293], 00:08:43.367 | 99.00th=[ 314], 99.50th=[ 322], 99.90th=[ 375], 99.95th=[ 693], 00:08:43.367 | 99.99th=[ 2073] 00:08:43.367 bw ( KiB/s): min=14592, max=15608, per=26.93%, avg=15080.00, stdev=386.70, samples=5 00:08:43.367 iops : min= 3648, max= 3902, avg=3770.00, stdev=96.67, samples=5 00:08:43.367 lat (usec) : 250=56.06%, 500=43.85%, 750=0.04%, 1000=0.02% 00:08:43.367 lat (msec) : 2=0.01%, 4=0.02% 00:08:43.367 cpu : usr=0.95%, sys=4.97%, ctx=11092, majf=0, minf=1 00:08:43.367 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:43.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.367 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.367 issued rwts: total=11092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:43.367 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:43.367 00:08:43.367 Run status group 0 (all jobs): 00:08:43.367 READ: bw=54.7MiB/s (57.3MB/s), 14.7MiB/s-15.7MiB/s (15.4MB/s-16.5MB/s), io=206MiB (216MB), run=2957-3775msec 00:08:43.367 00:08:43.367 Disk stats (read/write): 00:08:43.367 nvme0n1: ios=13247/0, merge=0/0, ticks=3232/0, in_queue=3232, util=95.48% 00:08:43.367 nvme0n2: ios=14253/0, merge=0/0, ticks=3290/0, in_queue=3290, util=95.56% 00:08:43.367 nvme0n3: ios=11942/0, merge=0/0, ticks=2882/0, in_queue=2882, util=96.24% 00:08:43.367 nvme0n4: ios=10783/0, merge=0/0, ticks=2788/0, in_queue=2788, util=96.76% 00:08:43.625 12:43:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:43.625 12:43:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:08:43.884 12:43:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:43.884 12:43:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:08:44.143 12:43:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:44.143 12:43:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:08:44.403 12:43:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:44.403 12:43:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:08:44.403 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:08:44.403 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66196 00:08:44.403 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:08:44.403 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:44.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.662 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:44.662 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:08:44.662 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:44.662 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:44.662 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:44.662 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:44.662 nvmf hotplug test: fio failed as expected 00:08:44.662 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:08:44.662 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:08:44.662 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:08:44.662 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:44.921 12:43:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:44.921 rmmod nvme_tcp 00:08:44.921 rmmod nvme_fabrics 00:08:44.921 rmmod nvme_keyring 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 65820 ']' 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 65820 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 65820 ']' 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 65820 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65820 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65820' 00:08:44.921 killing process with pid 65820 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 65820 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 65820 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:44.921 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:45.180 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br 
nomaster 00:08:45.180 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:45.180 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:45.180 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:45.180 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:45.180 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:45.180 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:45.180 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:45.180 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:45.180 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:45.180 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:45.180 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:45.180 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.180 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.180 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.180 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:08:45.180 ************************************ 00:08:45.180 END TEST nvmf_fio_target 00:08:45.180 ************************************ 00:08:45.180 00:08:45.180 real 0m18.745s 00:08:45.180 user 1m10.301s 00:08:45.180 sys 0m10.028s 00:08:45.180 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.180 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:45.180 12:43:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:08:45.180 12:43:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:45.180 12:43:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.180 12:43:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:45.440 ************************************ 00:08:45.440 START TEST nvmf_bdevio 00:08:45.440 ************************************ 00:08:45.440 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:08:45.440 * Looking for test storage... 
00:08:45.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:45.440 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:45.440 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:45.440 12:43:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:45.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.440 --rc genhtml_branch_coverage=1 00:08:45.440 --rc genhtml_function_coverage=1 00:08:45.440 --rc genhtml_legend=1 00:08:45.440 --rc geninfo_all_blocks=1 00:08:45.440 --rc geninfo_unexecuted_blocks=1 00:08:45.440 00:08:45.440 ' 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:45.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.440 --rc genhtml_branch_coverage=1 00:08:45.440 --rc genhtml_function_coverage=1 00:08:45.440 --rc genhtml_legend=1 00:08:45.440 --rc geninfo_all_blocks=1 00:08:45.440 --rc geninfo_unexecuted_blocks=1 00:08:45.440 00:08:45.440 ' 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:45.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.440 --rc genhtml_branch_coverage=1 00:08:45.440 --rc genhtml_function_coverage=1 00:08:45.440 --rc genhtml_legend=1 00:08:45.440 --rc geninfo_all_blocks=1 00:08:45.440 --rc geninfo_unexecuted_blocks=1 00:08:45.440 00:08:45.440 ' 00:08:45.440 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:45.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.440 --rc genhtml_branch_coverage=1 00:08:45.440 --rc genhtml_function_coverage=1 00:08:45.441 --rc genhtml_legend=1 00:08:45.441 --rc geninfo_all_blocks=1 00:08:45.441 --rc geninfo_unexecuted_blocks=1 00:08:45.441 00:08:45.441 ' 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:45.441 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:45.441 Cannot find device "nvmf_init_br" 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:45.441 Cannot find device "nvmf_init_br2" 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:08:45.441 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:45.701 Cannot find device "nvmf_tgt_br" 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:45.701 Cannot find device "nvmf_tgt_br2" 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:45.701 Cannot find device "nvmf_init_br" 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:45.701 Cannot find device "nvmf_init_br2" 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:45.701 Cannot find device "nvmf_tgt_br" 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:45.701 Cannot find device "nvmf_tgt_br2" 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:45.701 Cannot find device "nvmf_br" 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:45.701 Cannot find device "nvmf_init_if" 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:45.701 Cannot find device "nvmf_init_if2" 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:45.701 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:45.701 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:45.701 
12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:45.701 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:45.961 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:45.961 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:08:45.961 00:08:45.961 --- 10.0.0.3 ping statistics --- 00:08:45.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.961 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:45.961 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:45.961 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:08:45.961 00:08:45.961 --- 10.0.0.4 ping statistics --- 00:08:45.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.961 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:45.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:45.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:08:45.961 00:08:45.961 --- 10.0.0.1 ping statistics --- 00:08:45.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.961 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:45.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
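The nvmf_veth_init trace above builds the virtual topology the TCP tests run on: a network namespace for the target, veth pairs whose target ends move into that namespace, a bridge tying the peer ends together, and iptables rules opening port 4420. A condensed sketch of those steps, with names and addresses taken from the trace (needs root, iproute2 and iptables; the second initiator/target pair and the link-up commands are elided):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end is pushed into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic

The pings that follow (10.0.0.3/10.0.0.4 from the root namespace, 10.0.0.1/10.0.0.2 from inside nvmf_tgt_ns_spdk) simply verify the topology before the target is started.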
00:08:45.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:08:45.961 00:08:45.961 --- 10.0.0.2 ping statistics --- 00:08:45.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.961 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=66558 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 66558 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 66558 ']' 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.961 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:45.961 [2024-11-15 12:43:54.546031] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:08:45.961 [2024-11-15 12:43:54.546117] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.220 [2024-11-15 12:43:54.691947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:46.220 [2024-11-15 12:43:54.721017] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.220 [2024-11-15 12:43:54.721099] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.220 [2024-11-15 12:43:54.721126] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:46.220 [2024-11-15 12:43:54.721133] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:46.220 [2024-11-15 12:43:54.721139] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:46.220 [2024-11-15 12:43:54.722304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:46.220 [2024-11-15 12:43:54.722452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:46.220 [2024-11-15 12:43:54.722506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:46.220 [2024-11-15 12:43:54.722510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.220 [2024-11-15 12:43:54.750736] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:46.220 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:46.220 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:08:46.220 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:46.221 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:46.221 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:46.221 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.221 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:46.221 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.221 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:46.221 [2024-11-15 12:43:54.845310] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.221 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.221 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:46.221 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.221 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:46.221 Malloc0 00:08:46.221 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.221 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:08:46.221 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.221 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:46.480 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.480 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:46.480 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.480 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:46.480 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.480 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:46.480 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.480 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:46.480 [2024-11-15 12:43:54.902792] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:46.480 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.480 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:08:46.480 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:08:46.480 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:08:46.480 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:08:46.480 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:46.480 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:46.480 { 00:08:46.480 "params": { 00:08:46.480 "name": "Nvme$subsystem", 00:08:46.480 "trtype": "$TEST_TRANSPORT", 00:08:46.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:46.480 "adrfam": "ipv4", 00:08:46.480 "trsvcid": "$NVMF_PORT", 00:08:46.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:46.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:46.480 "hdgst": ${hdgst:-false}, 00:08:46.480 "ddgst": ${ddgst:-false} 00:08:46.480 }, 00:08:46.480 "method": "bdev_nvme_attach_controller" 00:08:46.480 } 00:08:46.480 EOF 00:08:46.480 )") 00:08:46.480 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:08:46.480 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
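The rpc_cmd calls traced above assemble the bdevio fixture: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, a namespace, and a listener on 10.0.0.3:4420. As a sketch, the same bring-up issued directly with scripts/rpc.py would look as follows, assuming nvmf_tgt is already up and listening on its default RPC socket:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0            # 64 MiB bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420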
00:08:46.480 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:08:46.480 12:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:46.480 "params": { 00:08:46.480 "name": "Nvme1", 00:08:46.480 "trtype": "tcp", 00:08:46.480 "traddr": "10.0.0.3", 00:08:46.480 "adrfam": "ipv4", 00:08:46.480 "trsvcid": "4420", 00:08:46.480 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:46.480 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:46.480 "hdgst": false, 00:08:46.480 "ddgst": false 00:08:46.480 }, 00:08:46.480 "method": "bdev_nvme_attach_controller" 00:08:46.480 }' 00:08:46.480 [2024-11-15 12:43:54.954902] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:08:46.480 [2024-11-15 12:43:54.954983] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66587 ] 00:08:46.480 [2024-11-15 12:43:55.103356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:46.480 [2024-11-15 12:43:55.144654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.480 [2024-11-15 12:43:55.144769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.480 [2024-11-15 12:43:55.144778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.740 [2024-11-15 12:43:55.187021] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:46.740 I/O targets: 00:08:46.740 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:08:46.740 00:08:46.740 00:08:46.740 CUnit - A unit testing framework for C - Version 2.1-3 00:08:46.740 http://cunit.sourceforge.net/ 00:08:46.740 00:08:46.740 00:08:46.740 Suite: bdevio tests on: Nvme1n1 00:08:46.740 Test: blockdev write read block ...passed 00:08:46.740 Test: blockdev write zeroes read block ...passed 00:08:46.740 Test: blockdev write zeroes read no split ...passed 00:08:46.740 Test: blockdev write zeroes read split ...passed 00:08:46.740 Test: blockdev write zeroes read split partial ...passed 00:08:46.740 Test: blockdev reset ...[2024-11-15 12:43:55.319070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:08:46.740 [2024-11-15 12:43:55.319217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x184c180 (9): Bad file descriptor 00:08:46.740 passed 00:08:46.740 Test: blockdev write read 8 blocks ...[2024-11-15 12:43:55.339092] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
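The resolved JSON printed above is what gen_nvmf_target_json hands to the bdevio app through --json /dev/fd/62; it attaches the exported subsystem as controller Nvme1, which then surfaces as the Nvme1n1 block device under test. A rough equivalent of that single config entry, expressed as an RPC against an already running SPDK app rather than a JSON config (a sketch, with flag spelling as in scripts/rpc.py):

    scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1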
00:08:46.740 passed 00:08:46.740 Test: blockdev write read size > 128k ...passed 00:08:46.740 Test: blockdev write read invalid size ...passed 00:08:46.740 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:46.740 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:46.740 Test: blockdev write read max offset ...passed 00:08:46.740 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:46.740 Test: blockdev writev readv 8 blocks ...passed 00:08:46.740 Test: blockdev writev readv 30 x 1block ...passed 00:08:46.740 Test: blockdev writev readv block ...passed 00:08:46.740 Test: blockdev writev readv size > 128k ...passed 00:08:46.740 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:46.740 Test: blockdev comparev and writev ...[2024-11-15 12:43:55.346719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:46.740 [2024-11-15 12:43:55.346774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:08:46.740 [2024-11-15 12:43:55.346795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:46.740 [2024-11-15 12:43:55.346806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:08:46.740 passed 00:08:46.740 Test: blockdev nvme passthru rw ...[2024-11-15 12:43:55.347270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:46.740 [2024-11-15 12:43:55.347301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:08:46.740 [2024-11-15 12:43:55.347318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:46.740 [2024-11-15 12:43:55.347327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:08:46.740 [2024-11-15 12:43:55.347625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:46.740 [2024-11-15 12:43:55.347658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:08:46.740 [2024-11-15 12:43:55.347675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:46.740 [2024-11-15 12:43:55.347686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:08:46.740 [2024-11-15 12:43:55.347963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:46.740 [2024-11-15 12:43:55.347979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:08:46.740 [2024-11-15 12:43:55.348010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:46.740 [2024-11-15 12:43:55.348019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED 
FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:08:46.740 passed 00:08:46.740 Test: blockdev nvme passthru vendor specific ...passed 00:08:46.740 Test: blockdev nvme admin passthru ...[2024-11-15 12:43:55.348833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:46.740 [2024-11-15 12:43:55.348858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:08:46.740 [2024-11-15 12:43:55.348962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:46.740 [2024-11-15 12:43:55.348978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:08:46.740 [2024-11-15 12:43:55.349091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:46.740 [2024-11-15 12:43:55.349107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:08:46.740 [2024-11-15 12:43:55.349223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:46.740 [2024-11-15 12:43:55.349238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:08:46.740 passed 00:08:46.740 Test: blockdev copy ...passed 00:08:46.740 00:08:46.740 Run Summary: Type Total Ran Passed Failed Inactive 00:08:46.740 suites 1 1 n/a 0 0 00:08:46.740 tests 23 23 23 0 0 00:08:46.740 asserts 152 152 152 0 n/a 00:08:46.740 00:08:46.740 Elapsed time = 0.160 seconds 00:08:47.003 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:47.003 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.003 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:47.003 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.003 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:08:47.003 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:08:47.003 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:47.003 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:08:47.003 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:47.003 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:08:47.003 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:47.003 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:47.003 rmmod nvme_tcp 00:08:47.003 rmmod nvme_fabrics 00:08:47.003 rmmod nvme_keyring 00:08:47.003 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:47.003 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:08:47.003 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:08:47.003 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@517 -- # '[' -n 66558 ']' 00:08:47.003 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 66558 00:08:47.003 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 66558 ']' 00:08:47.003 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 66558 00:08:47.003 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:08:47.004 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:47.004 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66558 00:08:47.004 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:08:47.004 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:08:47.004 killing process with pid 66558 00:08:47.004 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66558' 00:08:47.004 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 66558 00:08:47.004 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 66558 00:08:47.263 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:47.263 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:47.263 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:47.263 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:08:47.263 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:08:47.263 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:47.263 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:08:47.263 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:47.263 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:47.263 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:47.263 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:47.263 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:47.263 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:47.263 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:47.263 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:47.263 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:47.263 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:47.263 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:47.263 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:47.522 12:43:55 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:47.522 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:47.522 12:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:47.522 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:47.522 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.522 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.522 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.522 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:08:47.522 00:08:47.522 real 0m2.186s 00:08:47.522 user 0m5.466s 00:08:47.522 sys 0m0.716s 00:08:47.522 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.522 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:47.522 ************************************ 00:08:47.522 END TEST nvmf_bdevio 00:08:47.522 ************************************ 00:08:47.522 12:43:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:47.522 00:08:47.522 real 2m27.867s 00:08:47.522 user 6m22.910s 00:08:47.522 sys 0m52.573s 00:08:47.522 12:43:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.522 12:43:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:47.522 ************************************ 00:08:47.522 END TEST nvmf_target_core 00:08:47.522 ************************************ 00:08:47.522 12:43:56 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:08:47.522 12:43:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:47.522 12:43:56 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.522 12:43:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:47.522 ************************************ 00:08:47.522 START TEST nvmf_target_extra 00:08:47.522 ************************************ 00:08:47.522 12:43:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:08:47.782 * Looking for test storage... 
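Before the next suite starts, nvmftestfini above unloads the nvme-tcp/fabrics/keyring modules, kills the nvmf_tgt process, and unwinds the network fixture. A sketch of that teardown, using the names from the trace (needs root; the remaining interfaces are handled the same way, and the final namespace removal is an assumption about what _remove_spdk_ns amounts to):

    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the rules tagged with the SPDK_NVMF comment
    ip link set nvmf_init_br nomaster
    ip link set nvmf_init_br down
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns delete nvmf_tgt_ns_spdk                        # assumption: equivalent of the trace's remove_spdk_ns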
00:08:47.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:47.782 12:43:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:47.782 12:43:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:47.782 12:43:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:08:47.782 12:43:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:47.782 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.782 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.782 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:47.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.783 --rc genhtml_branch_coverage=1 00:08:47.783 --rc genhtml_function_coverage=1 00:08:47.783 --rc genhtml_legend=1 00:08:47.783 --rc geninfo_all_blocks=1 00:08:47.783 --rc geninfo_unexecuted_blocks=1 00:08:47.783 00:08:47.783 ' 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:47.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.783 --rc genhtml_branch_coverage=1 00:08:47.783 --rc genhtml_function_coverage=1 00:08:47.783 --rc genhtml_legend=1 00:08:47.783 --rc geninfo_all_blocks=1 00:08:47.783 --rc geninfo_unexecuted_blocks=1 00:08:47.783 00:08:47.783 ' 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:47.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.783 --rc genhtml_branch_coverage=1 00:08:47.783 --rc genhtml_function_coverage=1 00:08:47.783 --rc genhtml_legend=1 00:08:47.783 --rc geninfo_all_blocks=1 00:08:47.783 --rc geninfo_unexecuted_blocks=1 00:08:47.783 00:08:47.783 ' 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:47.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.783 --rc genhtml_branch_coverage=1 00:08:47.783 --rc genhtml_function_coverage=1 00:08:47.783 --rc genhtml_legend=1 00:08:47.783 --rc geninfo_all_blocks=1 00:08:47.783 --rc geninfo_unexecuted_blocks=1 00:08:47.783 00:08:47.783 ' 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.783 12:43:56 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:47.783 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:08:47.783 12:43:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:47.784 12:43:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.784 12:43:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:08:47.784 ************************************ 00:08:47.784 START TEST nvmf_auth_target 00:08:47.784 ************************************ 00:08:47.784 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:08:47.784 * Looking for test storage... 
00:08:47.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:47.784 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:47.784 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:08:47.784 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:48.047 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:48.047 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:48.047 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:48.047 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:48.047 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:48.047 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:48.047 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:48.047 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:48.047 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:48.047 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:48.047 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:48.047 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:48.047 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:08:48.047 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:08:48.047 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:48.047 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:48.047 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:08:48.047 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:08:48.047 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:48.047 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:08:48.047 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:48.047 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:08:48.047 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:08:48.047 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.047 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:08:48.047 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:48.047 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:48.047 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:48.047 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:48.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.048 --rc genhtml_branch_coverage=1 00:08:48.048 --rc genhtml_function_coverage=1 00:08:48.048 --rc genhtml_legend=1 00:08:48.048 --rc geninfo_all_blocks=1 00:08:48.048 --rc geninfo_unexecuted_blocks=1 00:08:48.048 00:08:48.048 ' 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:48.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.048 --rc genhtml_branch_coverage=1 00:08:48.048 --rc genhtml_function_coverage=1 00:08:48.048 --rc genhtml_legend=1 00:08:48.048 --rc geninfo_all_blocks=1 00:08:48.048 --rc geninfo_unexecuted_blocks=1 00:08:48.048 00:08:48.048 ' 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:48.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.048 --rc genhtml_branch_coverage=1 00:08:48.048 --rc genhtml_function_coverage=1 00:08:48.048 --rc genhtml_legend=1 00:08:48.048 --rc geninfo_all_blocks=1 00:08:48.048 --rc geninfo_unexecuted_blocks=1 00:08:48.048 00:08:48.048 ' 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:48.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.048 --rc genhtml_branch_coverage=1 00:08:48.048 --rc genhtml_function_coverage=1 00:08:48.048 --rc genhtml_legend=1 00:08:48.048 --rc geninfo_all_blocks=1 00:08:48.048 --rc geninfo_unexecuted_blocks=1 00:08:48.048 00:08:48.048 ' 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:48.048 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:48.048 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:48.049 
12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:48.049 Cannot find device "nvmf_init_br" 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:48.049 Cannot find device "nvmf_init_br2" 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:48.049 Cannot find device "nvmf_tgt_br" 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:48.049 Cannot find device "nvmf_tgt_br2" 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:48.049 Cannot find device "nvmf_init_br" 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:48.049 Cannot find device "nvmf_init_br2" 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:48.049 Cannot find device "nvmf_tgt_br" 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:48.049 Cannot find device "nvmf_tgt_br2" 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:48.049 Cannot find device "nvmf_br" 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:48.049 Cannot find device "nvmf_init_if" 00:08:48.049 12:43:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:48.049 Cannot find device "nvmf_init_if2" 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:48.049 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:48.049 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:08:48.049 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:48.337 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:48.337 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:48.337 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:48.337 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:48.337 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:48.337 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:48.337 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:48.337 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:48.337 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:48.337 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:48.337 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:48.337 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:48.337 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:48.337 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:48.338 12:43:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:48.338 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:48.338 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:08:48.338 00:08:48.338 --- 10.0.0.3 ping statistics --- 00:08:48.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.338 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:48.338 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:48.338 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:08:48.338 00:08:48.338 --- 10.0.0.4 ping statistics --- 00:08:48.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.338 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:48.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:48.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:08:48.338 00:08:48.338 --- 10.0.0.1 ping statistics --- 00:08:48.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.338 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:48.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:48.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:08:48.338 00:08:48.338 --- 10.0.0.2 ping statistics --- 00:08:48.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.338 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=66871 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 66871 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 66871 ']' 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
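[Editor's note for readability: the veth/namespace topology that nvmf_veth_init assembles in the trace above boils down to the following shell sketch. Interface names and addresses are taken from the log itself; this is a condensed restatement, not the verbatim common.sh code.]

  # host-side veth ends stay in the root namespace; target ends move into nvmf_tgt_ns_spdk
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # initiator addresses 10.0.0.1/.2, target addresses 10.0.0.3/.4 inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # bridge the four *_br peer ends together and open TCP port 4420 for NVMe/TCP
  ip link add nvmf_br type bridge
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # (all links, plus lo inside the namespace, are then brought up before the ping checks)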
00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:48.338 12:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=66890 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=98d3a900d2d51a871e46d9d01e8379bf52ed3482fba5df13 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.wGd 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 98d3a900d2d51a871e46d9d01e8379bf52ed3482fba5df13 0 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 98d3a900d2d51a871e46d9d01e8379bf52ed3482fba5df13 0 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=98d3a900d2d51a871e46d9d01e8379bf52ed3482fba5df13 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:08:48.931 12:43:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.wGd 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.wGd 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.wGd 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=246b28964435d1d117d481489d88b3b4c1b4093ee5436f3b191aa6d4a389f219 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:08:48.931 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.oW0 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 246b28964435d1d117d481489d88b3b4c1b4093ee5436f3b191aa6d4a389f219 3 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 246b28964435d1d117d481489d88b3b4c1b4093ee5436f3b191aa6d4a389f219 3 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=246b28964435d1d117d481489d88b3b4c1b4093ee5436f3b191aa6d4a389f219 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.oW0 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.oW0 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.oW0 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:08:48.932 12:43:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=45c0aca06888a8e9d0a809e32fb7fc0f 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.vBc 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 45c0aca06888a8e9d0a809e32fb7fc0f 1 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 45c0aca06888a8e9d0a809e32fb7fc0f 1 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=45c0aca06888a8e9d0a809e32fb7fc0f 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.vBc 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.vBc 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.vBc 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=af18e035ceddbf74ed48c110c2583196f87105cd0400d2bb 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.6lR 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key af18e035ceddbf74ed48c110c2583196f87105cd0400d2bb 2 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 af18e035ceddbf74ed48c110c2583196f87105cd0400d2bb 2 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=af18e035ceddbf74ed48c110c2583196f87105cd0400d2bb 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:08:48.932 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:08:49.191 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.6lR 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.6lR 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.6lR 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=149121744676e9bfcbff627ea95cc90e79e611a8774247b6 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ftR 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 149121744676e9bfcbff627ea95cc90e79e611a8774247b6 2 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 149121744676e9bfcbff627ea95cc90e79e611a8774247b6 2 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=149121744676e9bfcbff627ea95cc90e79e611a8774247b6 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ftR 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ftR 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.ftR 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:08:49.192 12:43:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bc14dda8b7c18fc7b314f4f773ba26bd 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.R4a 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bc14dda8b7c18fc7b314f4f773ba26bd 1 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bc14dda8b7c18fc7b314f4f773ba26bd 1 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bc14dda8b7c18fc7b314f4f773ba26bd 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.R4a 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.R4a 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.R4a 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=389aa5e6b52558c35ee8b6d7661a323d589f28905c16d42ef76d642ee91f9131 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.6a7 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
389aa5e6b52558c35ee8b6d7661a323d589f28905c16d42ef76d642ee91f9131 3 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 389aa5e6b52558c35ee8b6d7661a323d589f28905c16d42ef76d642ee91f9131 3 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=389aa5e6b52558c35ee8b6d7661a323d589f28905c16d42ef76d642ee91f9131 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.6a7 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.6a7 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.6a7 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 66871 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 66871 ']' 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.192 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:49.759 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.759 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:08:49.759 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 66890 /var/tmp/host.sock 00:08:49.759 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 66890 ']' 00:08:49.759 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:08:49.759 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:08:49.759 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
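[Editor's note for readability: the keys generated above are plain hex strings wrapped into DHCHAP secrets. The sketch below is a stand-alone rendering of what gen_dhchap_key/format_dhchap_key appear to produce, assuming the standard DHHC-1 layout (base64 of the key material followed by its little-endian CRC32, prefixed with a hash identifier: 00 = unhashed, 01 = sha256, 02 = sha384, 03 = sha512); the real script does the same via an inline "python -" in nvmf/common.sh.]

  key=$(xxd -p -c0 -l 24 /dev/urandom)        # e.g. 98d3a900d2d5...  (48 hex chars)
  python3 - "$key" <<'EOF'
import sys, base64, struct, zlib
key = sys.argv[1].encode()                    # the hex string itself is the secret material
crc = struct.pack('<I', zlib.crc32(key))      # 4-byte little-endian CRC32 appended before encoding
print('DHHC-1:00:' + base64.b64encode(key + crc).decode() + ':')
EOF
  # -> DHHC-1:00:<base64>:  written to a /tmp/spdk.key-* file and chmod 0600, then
  #    registered on target and host with keyring_file_add_key (key0/ckey0, key1/ckey1, ...)
  #    before the nvmf_subsystem_add_host / bdev_nvme_attach_controller calls that follow.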
00:08:49.759 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.759 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:50.017 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.017 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:08:50.017 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:08:50.017 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.017 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:50.017 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.017 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:08:50.017 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wGd 00:08:50.017 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.017 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:50.017 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.017 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.wGd 00:08:50.017 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.wGd 00:08:50.276 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.oW0 ]] 00:08:50.276 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oW0 00:08:50.276 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.276 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:50.276 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.276 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oW0 00:08:50.276 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oW0 00:08:50.535 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:08:50.535 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.vBc 00:08:50.535 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.535 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:50.535 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.535 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.vBc 00:08:50.535 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.vBc 00:08:50.793 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.6lR ]] 00:08:50.793 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6lR 00:08:50.793 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.793 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:50.793 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.794 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6lR 00:08:50.794 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6lR 00:08:51.052 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:08:51.052 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ftR 00:08:51.052 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.052 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:51.052 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.052 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ftR 00:08:51.052 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ftR 00:08:51.311 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.R4a ]] 00:08:51.311 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.R4a 00:08:51.311 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.311 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:51.311 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.311 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.R4a 00:08:51.311 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.R4a 00:08:51.570 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:08:51.570 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.6a7 00:08:51.570 12:43:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.570 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:51.570 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.570 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.6a7 00:08:51.570 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.6a7 00:08:51.570 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:08:51.570 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:08:51.570 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:08:51.570 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:08:51.570 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:08:51.570 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:08:51.829 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:08:51.829 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:08:51.829 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:08:51.829 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:08:51.829 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:08:51.829 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:08:51.829 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:08:51.829 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.829 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:51.829 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.829 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:08:51.829 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:08:51.829 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:08:52.088 00:08:52.088 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:08:52.088 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:08:52.088 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:08:52.347 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:08:52.347 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:08:52.347 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.347 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:52.347 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.347 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:08:52.347 { 00:08:52.347 "cntlid": 1, 00:08:52.347 "qid": 0, 00:08:52.347 "state": "enabled", 00:08:52.347 "thread": "nvmf_tgt_poll_group_000", 00:08:52.347 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:08:52.347 "listen_address": { 00:08:52.347 "trtype": "TCP", 00:08:52.347 "adrfam": "IPv4", 00:08:52.347 "traddr": "10.0.0.3", 00:08:52.347 "trsvcid": "4420" 00:08:52.347 }, 00:08:52.347 "peer_address": { 00:08:52.347 "trtype": "TCP", 00:08:52.347 "adrfam": "IPv4", 00:08:52.347 "traddr": "10.0.0.1", 00:08:52.347 "trsvcid": "33352" 00:08:52.347 }, 00:08:52.347 "auth": { 00:08:52.347 "state": "completed", 00:08:52.347 "digest": "sha256", 00:08:52.347 "dhgroup": "null" 00:08:52.347 } 00:08:52.347 } 00:08:52.347 ]' 00:08:52.347 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:08:52.605 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:08:52.605 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:08:52.605 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:08:52.605 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:08:52.605 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:08:52.605 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:08:52.605 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:08:52.862 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:08:52.862 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:08:57.050 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:08:57.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:08:57.050 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:08:57.050 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.050 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:57.050 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.050 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:08:57.050 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:08:57.050 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:08:57.050 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:08:57.050 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:08:57.050 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:08:57.050 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:08:57.050 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:08:57.050 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:08:57.050 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:08:57.050 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.050 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:57.050 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.050 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:08:57.050 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:08:57.050 12:44:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:08:57.309 00:08:57.309 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:08:57.309 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:08:57.309 12:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:08:57.568 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:08:57.568 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:08:57.568 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.568 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:57.568 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.568 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:08:57.568 { 00:08:57.568 "cntlid": 3, 00:08:57.568 "qid": 0, 00:08:57.568 "state": "enabled", 00:08:57.568 "thread": "nvmf_tgt_poll_group_000", 00:08:57.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:08:57.568 "listen_address": { 00:08:57.568 "trtype": "TCP", 00:08:57.568 "adrfam": "IPv4", 00:08:57.568 "traddr": "10.0.0.3", 00:08:57.568 "trsvcid": "4420" 00:08:57.568 }, 00:08:57.568 "peer_address": { 00:08:57.568 "trtype": "TCP", 00:08:57.568 "adrfam": "IPv4", 00:08:57.568 "traddr": "10.0.0.1", 00:08:57.568 "trsvcid": "57334" 00:08:57.568 }, 00:08:57.568 "auth": { 00:08:57.568 "state": "completed", 00:08:57.568 "digest": "sha256", 00:08:57.568 "dhgroup": "null" 00:08:57.568 } 00:08:57.568 } 00:08:57.568 ]' 00:08:57.568 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:08:57.568 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:08:57.568 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:08:57.827 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:08:57.827 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:08:57.827 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:08:57.827 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:08:57.827 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:08:58.086 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret 
DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:08:58.087 12:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:08:58.654 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:08:58.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:08:58.654 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:08:58.654 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.654 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:58.654 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.654 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:08:58.654 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:08:58.654 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:08:58.913 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:08:58.913 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:08:58.913 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:08:58.913 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:08:58.913 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:08:58.913 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:08:58.913 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:08:58.913 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.913 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:58.913 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.913 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:08:58.913 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:08:58.913 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:08:59.172 00:08:59.431 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:08:59.431 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:08:59.431 12:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:08:59.690 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:08:59.690 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:08:59.690 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.690 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:59.690 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.690 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:08:59.690 { 00:08:59.690 "cntlid": 5, 00:08:59.690 "qid": 0, 00:08:59.690 "state": "enabled", 00:08:59.690 "thread": "nvmf_tgt_poll_group_000", 00:08:59.690 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:08:59.690 "listen_address": { 00:08:59.690 "trtype": "TCP", 00:08:59.690 "adrfam": "IPv4", 00:08:59.690 "traddr": "10.0.0.3", 00:08:59.690 "trsvcid": "4420" 00:08:59.690 }, 00:08:59.690 "peer_address": { 00:08:59.690 "trtype": "TCP", 00:08:59.690 "adrfam": "IPv4", 00:08:59.690 "traddr": "10.0.0.1", 00:08:59.690 "trsvcid": "57368" 00:08:59.690 }, 00:08:59.690 "auth": { 00:08:59.690 "state": "completed", 00:08:59.690 "digest": "sha256", 00:08:59.690 "dhgroup": "null" 00:08:59.690 } 00:08:59.690 } 00:08:59.690 ]' 00:08:59.690 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:08:59.690 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:08:59.690 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:08:59.690 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:08:59.690 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:08:59.690 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:08:59.690 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:08:59.690 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:08:59.949 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:08:59.949 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:09:00.516 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:00.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:00.516 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:00.516 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.516 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:00.516 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.516 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:00.516 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:00.516 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:00.775 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:09:00.775 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:00.775 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:00.775 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:00.775 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:00.775 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:00.775 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key3 00:09:00.775 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.775 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:00.775 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.775 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:00.775 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:00.775 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:01.035 00:09:01.035 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:01.035 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:01.035 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:01.313 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:01.313 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:01.313 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.313 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:01.313 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.313 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:01.313 { 00:09:01.313 "cntlid": 7, 00:09:01.313 "qid": 0, 00:09:01.313 "state": "enabled", 00:09:01.313 "thread": "nvmf_tgt_poll_group_000", 00:09:01.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:09:01.313 "listen_address": { 00:09:01.313 "trtype": "TCP", 00:09:01.313 "adrfam": "IPv4", 00:09:01.313 "traddr": "10.0.0.3", 00:09:01.313 "trsvcid": "4420" 00:09:01.313 }, 00:09:01.313 "peer_address": { 00:09:01.313 "trtype": "TCP", 00:09:01.313 "adrfam": "IPv4", 00:09:01.313 "traddr": "10.0.0.1", 00:09:01.313 "trsvcid": "57388" 00:09:01.313 }, 00:09:01.313 "auth": { 00:09:01.313 "state": "completed", 00:09:01.313 "digest": "sha256", 00:09:01.313 "dhgroup": "null" 00:09:01.313 } 00:09:01.313 } 00:09:01.313 ]' 00:09:01.313 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:01.572 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:01.572 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:01.572 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:01.572 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:01.572 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:01.572 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:01.572 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:01.831 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:09:01.831 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:09:02.399 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:02.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:02.399 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:02.399 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.399 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:02.399 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.399 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:02.399 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:02.399 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:02.399 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:02.658 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:09:02.658 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:02.658 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:02.658 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:02.658 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:02.658 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:02.658 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:02.658 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.658 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:02.658 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.658 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:02.658 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:02.658 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:02.917 00:09:02.917 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:02.917 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:02.917 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:03.175 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:03.176 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:03.176 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.176 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:03.176 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.176 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:03.176 { 00:09:03.176 "cntlid": 9, 00:09:03.176 "qid": 0, 00:09:03.176 "state": "enabled", 00:09:03.176 "thread": "nvmf_tgt_poll_group_000", 00:09:03.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:09:03.176 "listen_address": { 00:09:03.176 "trtype": "TCP", 00:09:03.176 "adrfam": "IPv4", 00:09:03.176 "traddr": "10.0.0.3", 00:09:03.176 "trsvcid": "4420" 00:09:03.176 }, 00:09:03.176 "peer_address": { 00:09:03.176 "trtype": "TCP", 00:09:03.176 "adrfam": "IPv4", 00:09:03.176 "traddr": "10.0.0.1", 00:09:03.176 "trsvcid": "57398" 00:09:03.176 }, 00:09:03.176 "auth": { 00:09:03.176 "state": "completed", 00:09:03.176 "digest": "sha256", 00:09:03.176 "dhgroup": "ffdhe2048" 00:09:03.176 } 00:09:03.176 } 00:09:03.176 ]' 00:09:03.176 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:03.176 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:03.176 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:03.176 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:03.176 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:03.435 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:03.435 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:03.435 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:03.693 
12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:09:03.693 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:09:04.261 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:04.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:04.262 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:04.262 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.262 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:04.262 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.262 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:04.262 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:04.262 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:04.520 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:09:04.520 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:04.520 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:04.520 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:04.520 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:04.520 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:04.520 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:04.520 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.520 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:04.520 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.520 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:04.520 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:04.520 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:04.779 00:09:04.779 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:04.779 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:04.779 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:05.036 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:05.036 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:05.036 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.036 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:05.036 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.036 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:05.036 { 00:09:05.036 "cntlid": 11, 00:09:05.036 "qid": 0, 00:09:05.036 "state": "enabled", 00:09:05.036 "thread": "nvmf_tgt_poll_group_000", 00:09:05.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:09:05.036 "listen_address": { 00:09:05.036 "trtype": "TCP", 00:09:05.036 "adrfam": "IPv4", 00:09:05.036 "traddr": "10.0.0.3", 00:09:05.036 "trsvcid": "4420" 00:09:05.036 }, 00:09:05.036 "peer_address": { 00:09:05.036 "trtype": "TCP", 00:09:05.036 "adrfam": "IPv4", 00:09:05.036 "traddr": "10.0.0.1", 00:09:05.036 "trsvcid": "45204" 00:09:05.036 }, 00:09:05.036 "auth": { 00:09:05.036 "state": "completed", 00:09:05.036 "digest": "sha256", 00:09:05.036 "dhgroup": "ffdhe2048" 00:09:05.036 } 00:09:05.036 } 00:09:05.036 ]' 00:09:05.036 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:05.036 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:05.293 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:05.293 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:05.293 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:05.293 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:05.293 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:05.293 
12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:05.552 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:09:05.552 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:09:06.119 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:06.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:06.119 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:06.119 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.119 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:06.119 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.119 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:06.119 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:06.119 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:06.378 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:09:06.378 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:06.378 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:06.378 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:06.378 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:06.378 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:06.378 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:06.378 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.378 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:06.378 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:09:06.378 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:06.378 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:06.378 12:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:06.636 00:09:06.637 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:06.637 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:06.637 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:06.895 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:06.895 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:06.895 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.895 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:06.895 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.895 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:06.895 { 00:09:06.895 "cntlid": 13, 00:09:06.895 "qid": 0, 00:09:06.895 "state": "enabled", 00:09:06.895 "thread": "nvmf_tgt_poll_group_000", 00:09:06.895 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:09:06.895 "listen_address": { 00:09:06.895 "trtype": "TCP", 00:09:06.895 "adrfam": "IPv4", 00:09:06.895 "traddr": "10.0.0.3", 00:09:06.895 "trsvcid": "4420" 00:09:06.895 }, 00:09:06.895 "peer_address": { 00:09:06.895 "trtype": "TCP", 00:09:06.895 "adrfam": "IPv4", 00:09:06.895 "traddr": "10.0.0.1", 00:09:06.895 "trsvcid": "45242" 00:09:06.895 }, 00:09:06.895 "auth": { 00:09:06.895 "state": "completed", 00:09:06.895 "digest": "sha256", 00:09:06.895 "dhgroup": "ffdhe2048" 00:09:06.895 } 00:09:06.895 } 00:09:06.895 ]' 00:09:06.895 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:07.154 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:07.154 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:07.154 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:07.154 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:07.154 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:07.154 12:44:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:07.154 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:07.412 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:09:07.412 12:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:09:07.979 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:07.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:07.980 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:07.980 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.980 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:07.980 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.980 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:07.980 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:07.980 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:08.239 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:09:08.239 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:08.239 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:08.239 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:08.239 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:08.239 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:08.239 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key3 00:09:08.239 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.239 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:09:08.239 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.239 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:08.239 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:08.239 12:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:08.498 00:09:08.498 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:08.498 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:08.498 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:08.757 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:08.757 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:08.757 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.757 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:08.757 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.757 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:08.757 { 00:09:08.757 "cntlid": 15, 00:09:08.757 "qid": 0, 00:09:08.757 "state": "enabled", 00:09:08.757 "thread": "nvmf_tgt_poll_group_000", 00:09:08.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:09:08.757 "listen_address": { 00:09:08.757 "trtype": "TCP", 00:09:08.757 "adrfam": "IPv4", 00:09:08.757 "traddr": "10.0.0.3", 00:09:08.757 "trsvcid": "4420" 00:09:08.757 }, 00:09:08.757 "peer_address": { 00:09:08.757 "trtype": "TCP", 00:09:08.757 "adrfam": "IPv4", 00:09:08.757 "traddr": "10.0.0.1", 00:09:08.757 "trsvcid": "45284" 00:09:08.757 }, 00:09:08.757 "auth": { 00:09:08.757 "state": "completed", 00:09:08.757 "digest": "sha256", 00:09:08.757 "dhgroup": "ffdhe2048" 00:09:08.757 } 00:09:08.757 } 00:09:08.757 ]' 00:09:08.757 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:08.757 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:08.757 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:08.757 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:08.757 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:08.757 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:08.757 
12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:08.757 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:09.016 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:09:09.016 12:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:09:09.584 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:09.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:09.584 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:09.584 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.584 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:09.584 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.584 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:09.584 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:09.584 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:09.584 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:09.843 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:09:09.843 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:09.843 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:09.843 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:09.843 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:09.843 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:09.843 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:09.843 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.843 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:09:09.843 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.843 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:09.843 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:09.843 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:10.102 00:09:10.102 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:10.102 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:10.103 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:10.671 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:10.671 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:10.671 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.671 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:10.671 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.671 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:10.671 { 00:09:10.671 "cntlid": 17, 00:09:10.671 "qid": 0, 00:09:10.671 "state": "enabled", 00:09:10.671 "thread": "nvmf_tgt_poll_group_000", 00:09:10.671 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:09:10.671 "listen_address": { 00:09:10.671 "trtype": "TCP", 00:09:10.671 "adrfam": "IPv4", 00:09:10.671 "traddr": "10.0.0.3", 00:09:10.671 "trsvcid": "4420" 00:09:10.671 }, 00:09:10.671 "peer_address": { 00:09:10.671 "trtype": "TCP", 00:09:10.671 "adrfam": "IPv4", 00:09:10.671 "traddr": "10.0.0.1", 00:09:10.671 "trsvcid": "45322" 00:09:10.671 }, 00:09:10.671 "auth": { 00:09:10.671 "state": "completed", 00:09:10.671 "digest": "sha256", 00:09:10.671 "dhgroup": "ffdhe3072" 00:09:10.671 } 00:09:10.671 } 00:09:10.671 ]' 00:09:10.671 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:10.671 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:10.671 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:10.671 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:10.671 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:10.671 12:44:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:10.671 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:10.671 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:10.929 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:09:10.929 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:09:11.497 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:11.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:11.497 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:11.497 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.497 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:11.497 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.497 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:11.497 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:11.497 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:11.756 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:09:11.756 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:11.756 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:11.756 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:11.756 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:11.756 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:11.756 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:09:11.756 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.756 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:12.014 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.014 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:12.014 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:12.014 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:12.272 00:09:12.272 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:12.272 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:12.272 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:12.529 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:12.529 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:12.529 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.529 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:12.529 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.529 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:12.529 { 00:09:12.529 "cntlid": 19, 00:09:12.529 "qid": 0, 00:09:12.529 "state": "enabled", 00:09:12.529 "thread": "nvmf_tgt_poll_group_000", 00:09:12.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:09:12.529 "listen_address": { 00:09:12.529 "trtype": "TCP", 00:09:12.529 "adrfam": "IPv4", 00:09:12.529 "traddr": "10.0.0.3", 00:09:12.529 "trsvcid": "4420" 00:09:12.529 }, 00:09:12.529 "peer_address": { 00:09:12.530 "trtype": "TCP", 00:09:12.530 "adrfam": "IPv4", 00:09:12.530 "traddr": "10.0.0.1", 00:09:12.530 "trsvcid": "45352" 00:09:12.530 }, 00:09:12.530 "auth": { 00:09:12.530 "state": "completed", 00:09:12.530 "digest": "sha256", 00:09:12.530 "dhgroup": "ffdhe3072" 00:09:12.530 } 00:09:12.530 } 00:09:12.530 ]' 00:09:12.530 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:12.530 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:12.530 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:12.788 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:12.788 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:12.788 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:12.788 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:12.788 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:13.047 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:09:13.047 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:09:13.613 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:13.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:13.613 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:13.613 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.613 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:13.613 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.613 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:13.613 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:13.613 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:13.872 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:09:13.872 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:13.872 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:13.872 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:13.872 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:13.872 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:13.872 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:13.872 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.872 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.130 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.130 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:14.130 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:14.130 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:14.389 00:09:14.389 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:14.389 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:14.389 12:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:14.647 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:14.647 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:14.647 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.647 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.647 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.647 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:14.647 { 00:09:14.647 "cntlid": 21, 00:09:14.647 "qid": 0, 00:09:14.647 "state": "enabled", 00:09:14.647 "thread": "nvmf_tgt_poll_group_000", 00:09:14.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:09:14.647 "listen_address": { 00:09:14.647 "trtype": "TCP", 00:09:14.647 "adrfam": "IPv4", 00:09:14.647 "traddr": "10.0.0.3", 00:09:14.647 "trsvcid": "4420" 00:09:14.647 }, 00:09:14.647 "peer_address": { 00:09:14.647 "trtype": "TCP", 00:09:14.647 "adrfam": "IPv4", 00:09:14.647 "traddr": "10.0.0.1", 00:09:14.647 "trsvcid": "36116" 00:09:14.647 }, 00:09:14.647 "auth": { 00:09:14.647 "state": "completed", 00:09:14.647 "digest": "sha256", 00:09:14.647 "dhgroup": "ffdhe3072" 00:09:14.647 } 00:09:14.647 } 00:09:14.647 ]' 00:09:14.647 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:14.647 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:14.647 12:44:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:14.647 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:14.647 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:14.647 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:14.647 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:14.647 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:14.905 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:09:14.905 12:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:09:15.471 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:15.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:15.471 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:15.471 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.471 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:15.471 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.471 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:15.471 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:15.471 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:15.730 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:09:15.730 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:15.730 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:15.730 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:15.730 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:15.730 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:15.730 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key3 00:09:15.730 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.730 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:15.730 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.730 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:15.730 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:15.730 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:16.360 00:09:16.360 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:16.360 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:16.360 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:16.625 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:16.625 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:16.625 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.625 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:16.625 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.625 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:16.625 { 00:09:16.625 "cntlid": 23, 00:09:16.625 "qid": 0, 00:09:16.625 "state": "enabled", 00:09:16.625 "thread": "nvmf_tgt_poll_group_000", 00:09:16.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:09:16.625 "listen_address": { 00:09:16.626 "trtype": "TCP", 00:09:16.626 "adrfam": "IPv4", 00:09:16.626 "traddr": "10.0.0.3", 00:09:16.626 "trsvcid": "4420" 00:09:16.626 }, 00:09:16.626 "peer_address": { 00:09:16.626 "trtype": "TCP", 00:09:16.626 "adrfam": "IPv4", 00:09:16.626 "traddr": "10.0.0.1", 00:09:16.626 "trsvcid": "36156" 00:09:16.626 }, 00:09:16.626 "auth": { 00:09:16.626 "state": "completed", 00:09:16.626 "digest": "sha256", 00:09:16.626 "dhgroup": "ffdhe3072" 00:09:16.626 } 00:09:16.626 } 00:09:16.626 ]' 00:09:16.626 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:16.626 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:09:16.626 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:16.626 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:16.626 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:16.626 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:16.626 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:16.626 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:16.884 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:09:16.884 12:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:09:17.821 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:17.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:17.821 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:17.821 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.821 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:17.821 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.821 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:17.821 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:17.821 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:17.821 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:17.821 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:09:17.821 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:17.821 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:17.821 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:17.821 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:17.821 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:17.821 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:17.821 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.821 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:17.821 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.821 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:17.821 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:17.821 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:18.389 00:09:18.389 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:18.389 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:18.389 12:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:18.647 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:18.647 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:18.647 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.647 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:18.647 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.647 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:18.647 { 00:09:18.647 "cntlid": 25, 00:09:18.647 "qid": 0, 00:09:18.647 "state": "enabled", 00:09:18.647 "thread": "nvmf_tgt_poll_group_000", 00:09:18.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:09:18.647 "listen_address": { 00:09:18.647 "trtype": "TCP", 00:09:18.647 "adrfam": "IPv4", 00:09:18.647 "traddr": "10.0.0.3", 00:09:18.647 "trsvcid": "4420" 00:09:18.647 }, 00:09:18.647 "peer_address": { 00:09:18.647 "trtype": "TCP", 00:09:18.647 "adrfam": "IPv4", 00:09:18.647 "traddr": "10.0.0.1", 00:09:18.647 "trsvcid": "36192" 00:09:18.647 }, 00:09:18.647 "auth": { 00:09:18.647 "state": "completed", 00:09:18.647 "digest": "sha256", 00:09:18.647 "dhgroup": "ffdhe4096" 00:09:18.647 } 00:09:18.647 } 00:09:18.647 ]' 00:09:18.647 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:09:18.647 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:18.647 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:18.647 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:18.647 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:18.647 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:18.647 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:18.648 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:18.906 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:09:18.906 12:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:09:19.846 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:19.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:19.846 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:19.846 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.846 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:19.846 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.846 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:19.846 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:19.846 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:19.846 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:09:19.846 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:19.846 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:19.846 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:19.846 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:19.846 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:19.846 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:19.846 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.846 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:19.846 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.846 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:19.846 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:19.846 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:20.414 00:09:20.414 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:20.414 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:20.414 12:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:20.673 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:20.673 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:20.673 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.673 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:20.673 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.673 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:20.673 { 00:09:20.673 "cntlid": 27, 00:09:20.673 "qid": 0, 00:09:20.673 "state": "enabled", 00:09:20.673 "thread": "nvmf_tgt_poll_group_000", 00:09:20.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:09:20.673 "listen_address": { 00:09:20.673 "trtype": "TCP", 00:09:20.673 "adrfam": "IPv4", 00:09:20.673 "traddr": "10.0.0.3", 00:09:20.673 "trsvcid": "4420" 00:09:20.673 }, 00:09:20.673 "peer_address": { 00:09:20.673 "trtype": "TCP", 00:09:20.673 "adrfam": "IPv4", 00:09:20.673 "traddr": "10.0.0.1", 00:09:20.673 "trsvcid": "36218" 00:09:20.673 }, 00:09:20.673 "auth": { 00:09:20.673 "state": "completed", 
00:09:20.673 "digest": "sha256", 00:09:20.673 "dhgroup": "ffdhe4096" 00:09:20.673 } 00:09:20.673 } 00:09:20.673 ]' 00:09:20.673 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:20.673 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:20.673 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:20.673 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:20.673 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:20.673 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:20.673 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:20.673 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:20.932 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:09:20.932 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:09:21.500 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:21.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:21.500 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:21.500 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.500 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:21.500 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.500 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:21.500 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:21.500 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:21.759 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:09:21.759 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:21.759 12:44:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:21.759 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:21.759 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:21.759 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:21.759 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:21.759 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.759 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:21.759 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.759 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:21.760 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:21.760 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:22.327 00:09:22.327 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:22.327 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:22.327 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:22.586 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:22.586 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:22.586 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.586 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:22.586 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.586 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:22.586 { 00:09:22.586 "cntlid": 29, 00:09:22.586 "qid": 0, 00:09:22.586 "state": "enabled", 00:09:22.586 "thread": "nvmf_tgt_poll_group_000", 00:09:22.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:09:22.586 "listen_address": { 00:09:22.586 "trtype": "TCP", 00:09:22.586 "adrfam": "IPv4", 00:09:22.586 "traddr": "10.0.0.3", 00:09:22.586 "trsvcid": "4420" 00:09:22.586 }, 00:09:22.586 "peer_address": { 00:09:22.586 "trtype": "TCP", 00:09:22.586 "adrfam": 
"IPv4", 00:09:22.586 "traddr": "10.0.0.1", 00:09:22.586 "trsvcid": "36246" 00:09:22.586 }, 00:09:22.586 "auth": { 00:09:22.586 "state": "completed", 00:09:22.586 "digest": "sha256", 00:09:22.586 "dhgroup": "ffdhe4096" 00:09:22.586 } 00:09:22.586 } 00:09:22.586 ]' 00:09:22.586 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:22.586 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:22.586 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:22.586 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:22.586 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:22.586 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:22.586 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:22.586 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:22.845 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:09:22.845 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:09:23.413 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:23.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:23.413 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:23.413 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.413 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:23.413 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.413 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:23.413 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:23.413 12:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:23.672 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:09:23.672 12:44:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:23.672 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:23.672 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:23.672 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:23.672 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:23.672 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key3 00:09:23.672 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.672 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:23.672 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.672 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:23.672 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:23.672 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:24.239 00:09:24.239 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:24.239 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:24.239 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:24.239 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:24.239 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:24.240 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.240 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:24.240 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.240 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:24.240 { 00:09:24.240 "cntlid": 31, 00:09:24.240 "qid": 0, 00:09:24.240 "state": "enabled", 00:09:24.240 "thread": "nvmf_tgt_poll_group_000", 00:09:24.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:09:24.240 "listen_address": { 00:09:24.240 "trtype": "TCP", 00:09:24.240 "adrfam": "IPv4", 00:09:24.240 "traddr": "10.0.0.3", 00:09:24.240 "trsvcid": "4420" 00:09:24.240 }, 00:09:24.240 "peer_address": { 00:09:24.240 "trtype": "TCP", 
00:09:24.240 "adrfam": "IPv4", 00:09:24.240 "traddr": "10.0.0.1", 00:09:24.240 "trsvcid": "37268" 00:09:24.240 }, 00:09:24.240 "auth": { 00:09:24.240 "state": "completed", 00:09:24.240 "digest": "sha256", 00:09:24.240 "dhgroup": "ffdhe4096" 00:09:24.240 } 00:09:24.240 } 00:09:24.240 ]' 00:09:24.240 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:24.499 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:24.499 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:24.499 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:24.499 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:24.499 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:24.499 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:24.499 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:24.758 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:09:24.758 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:09:25.326 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:25.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:25.585 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:25.585 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.585 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:25.585 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.585 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:25.585 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:25.585 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:25.585 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:25.844 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:09:25.844 
12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:25.844 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:25.844 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:09:25.844 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:25.844 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:25.844 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:25.844 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.844 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:25.844 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.844 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:25.844 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:25.844 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:26.103 00:09:26.103 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:26.103 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:26.103 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:26.362 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:26.362 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:26.362 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.362 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:26.362 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.362 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:26.362 { 00:09:26.362 "cntlid": 33, 00:09:26.362 "qid": 0, 00:09:26.362 "state": "enabled", 00:09:26.362 "thread": "nvmf_tgt_poll_group_000", 00:09:26.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:09:26.362 "listen_address": { 00:09:26.362 "trtype": "TCP", 00:09:26.362 "adrfam": "IPv4", 00:09:26.362 "traddr": 
"10.0.0.3", 00:09:26.362 "trsvcid": "4420" 00:09:26.362 }, 00:09:26.362 "peer_address": { 00:09:26.362 "trtype": "TCP", 00:09:26.362 "adrfam": "IPv4", 00:09:26.362 "traddr": "10.0.0.1", 00:09:26.362 "trsvcid": "37298" 00:09:26.362 }, 00:09:26.362 "auth": { 00:09:26.362 "state": "completed", 00:09:26.362 "digest": "sha256", 00:09:26.362 "dhgroup": "ffdhe6144" 00:09:26.362 } 00:09:26.362 } 00:09:26.362 ]' 00:09:26.362 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:26.362 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:26.621 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:26.621 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:09:26.621 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:26.621 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:26.621 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:26.621 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:26.880 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:09:26.880 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:09:27.448 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:27.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:27.448 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:27.448 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.448 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:27.448 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.448 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:27.448 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:27.448 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:27.707 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:09:27.707 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:27.707 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:27.707 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:09:27.707 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:27.707 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:27.707 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:27.707 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.707 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:27.967 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.967 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:27.967 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:27.967 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:28.226 00:09:28.226 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:28.226 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:28.226 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:28.486 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:28.486 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:28.486 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.486 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:28.486 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.486 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:28.486 { 00:09:28.486 "cntlid": 35, 00:09:28.486 "qid": 0, 00:09:28.486 "state": "enabled", 00:09:28.486 "thread": "nvmf_tgt_poll_group_000", 
00:09:28.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:09:28.486 "listen_address": { 00:09:28.486 "trtype": "TCP", 00:09:28.486 "adrfam": "IPv4", 00:09:28.486 "traddr": "10.0.0.3", 00:09:28.486 "trsvcid": "4420" 00:09:28.486 }, 00:09:28.486 "peer_address": { 00:09:28.486 "trtype": "TCP", 00:09:28.486 "adrfam": "IPv4", 00:09:28.486 "traddr": "10.0.0.1", 00:09:28.486 "trsvcid": "37326" 00:09:28.486 }, 00:09:28.486 "auth": { 00:09:28.486 "state": "completed", 00:09:28.486 "digest": "sha256", 00:09:28.486 "dhgroup": "ffdhe6144" 00:09:28.486 } 00:09:28.486 } 00:09:28.486 ]' 00:09:28.486 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:28.486 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:28.486 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:28.745 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:09:28.745 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:28.745 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:28.745 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:28.745 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:29.005 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:09:29.005 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:09:29.573 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:29.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:29.574 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:29.574 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.574 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:29.574 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.574 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:29.574 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:29.574 12:44:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:29.833 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:09:29.833 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:29.833 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:29.833 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:09:29.833 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:29.833 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:29.833 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:29.833 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.833 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:29.833 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.833 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:29.833 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:29.833 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:30.093 00:09:30.093 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:30.093 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:30.093 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:30.352 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:30.352 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:30.352 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.352 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:30.352 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.352 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:30.352 { 
00:09:30.352 "cntlid": 37, 00:09:30.352 "qid": 0, 00:09:30.352 "state": "enabled", 00:09:30.352 "thread": "nvmf_tgt_poll_group_000", 00:09:30.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:09:30.352 "listen_address": { 00:09:30.352 "trtype": "TCP", 00:09:30.352 "adrfam": "IPv4", 00:09:30.352 "traddr": "10.0.0.3", 00:09:30.352 "trsvcid": "4420" 00:09:30.352 }, 00:09:30.352 "peer_address": { 00:09:30.352 "trtype": "TCP", 00:09:30.352 "adrfam": "IPv4", 00:09:30.352 "traddr": "10.0.0.1", 00:09:30.352 "trsvcid": "37360" 00:09:30.352 }, 00:09:30.352 "auth": { 00:09:30.352 "state": "completed", 00:09:30.352 "digest": "sha256", 00:09:30.352 "dhgroup": "ffdhe6144" 00:09:30.352 } 00:09:30.352 } 00:09:30.352 ]' 00:09:30.352 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:30.352 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:30.352 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:30.611 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:09:30.611 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:30.611 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:30.611 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:30.611 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:30.870 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:09:30.870 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:09:31.438 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:31.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:31.438 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:31.438 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.438 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:31.438 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.438 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:31.438 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:31.438 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:31.697 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:09:31.697 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:31.697 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:31.697 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:09:31.697 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:31.697 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:31.697 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key3 00:09:31.697 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.697 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:31.697 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.697 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:31.697 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:31.697 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:31.955 00:09:31.956 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:31.956 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:31.956 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:32.214 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:32.214 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:32.214 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.214 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.214 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.214 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:09:32.214 { 00:09:32.214 "cntlid": 39, 00:09:32.214 "qid": 0, 00:09:32.214 "state": "enabled", 00:09:32.214 "thread": "nvmf_tgt_poll_group_000", 00:09:32.214 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:09:32.214 "listen_address": { 00:09:32.214 "trtype": "TCP", 00:09:32.214 "adrfam": "IPv4", 00:09:32.214 "traddr": "10.0.0.3", 00:09:32.214 "trsvcid": "4420" 00:09:32.214 }, 00:09:32.214 "peer_address": { 00:09:32.214 "trtype": "TCP", 00:09:32.214 "adrfam": "IPv4", 00:09:32.214 "traddr": "10.0.0.1", 00:09:32.214 "trsvcid": "37376" 00:09:32.214 }, 00:09:32.214 "auth": { 00:09:32.214 "state": "completed", 00:09:32.214 "digest": "sha256", 00:09:32.214 "dhgroup": "ffdhe6144" 00:09:32.214 } 00:09:32.214 } 00:09:32.214 ]' 00:09:32.214 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:32.473 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:32.473 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:32.473 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:09:32.473 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:32.473 12:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:32.473 12:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:32.473 12:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:32.732 12:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:09:32.732 12:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:09:33.300 12:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:33.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:33.300 12:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:33.300 12:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.300 12:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.300 12:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.300 12:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:33.300 12:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:33.300 12:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:33.300 12:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:33.558 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:09:33.558 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:33.558 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:33.559 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:09:33.559 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:33.559 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:33.559 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:33.559 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.559 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.559 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.559 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:33.559 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:33.559 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:34.128 00:09:34.386 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:34.386 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:34.387 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:34.645 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:34.645 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:34.645 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.645 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.645 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:09:34.645 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:34.645 { 00:09:34.645 "cntlid": 41, 00:09:34.645 "qid": 0, 00:09:34.645 "state": "enabled", 00:09:34.645 "thread": "nvmf_tgt_poll_group_000", 00:09:34.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:09:34.645 "listen_address": { 00:09:34.645 "trtype": "TCP", 00:09:34.645 "adrfam": "IPv4", 00:09:34.645 "traddr": "10.0.0.3", 00:09:34.645 "trsvcid": "4420" 00:09:34.645 }, 00:09:34.645 "peer_address": { 00:09:34.645 "trtype": "TCP", 00:09:34.645 "adrfam": "IPv4", 00:09:34.645 "traddr": "10.0.0.1", 00:09:34.645 "trsvcid": "35272" 00:09:34.645 }, 00:09:34.645 "auth": { 00:09:34.645 "state": "completed", 00:09:34.645 "digest": "sha256", 00:09:34.645 "dhgroup": "ffdhe8192" 00:09:34.645 } 00:09:34.645 } 00:09:34.645 ]' 00:09:34.645 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:34.645 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:34.645 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:34.645 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:09:34.645 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:34.645 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:34.645 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:34.645 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:34.905 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:09:34.905 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:09:35.472 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:35.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:35.472 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:35.472 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.472 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.472 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
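For reference, the host-side sequence traced in the iteration above reduces to four RPCs against the host application's socket (/var/tmp/host.sock); rpc.py below stands for spdk/scripts/rpc.py as invoked in the trace. A minimal sketch, assuming the DH-CHAP keys (key0, ckey0, ...) were registered with the host earlier in the test, outside this excerpt:

# enable the digest/DH-group pair under test on the host bdev/nvme layer
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
# attach a controller; --dhchap-ctrlr-key requests bidirectional authentication
hostnqn=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# confirm the controller came up, then tear it down before the next key slot
rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0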
00:09:35.472 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:35.472 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:35.472 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:35.731 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:09:35.731 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:35.731 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:35.731 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:09:35.731 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:35.731 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:35.731 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:35.731 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.731 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.731 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.731 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:35.731 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:35.731 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:36.298 00:09:36.298 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:36.298 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:36.298 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:36.556 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:36.556 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:36.556 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.556 12:44:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.557 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.557 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:36.557 { 00:09:36.557 "cntlid": 43, 00:09:36.557 "qid": 0, 00:09:36.557 "state": "enabled", 00:09:36.557 "thread": "nvmf_tgt_poll_group_000", 00:09:36.557 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:09:36.557 "listen_address": { 00:09:36.557 "trtype": "TCP", 00:09:36.557 "adrfam": "IPv4", 00:09:36.557 "traddr": "10.0.0.3", 00:09:36.557 "trsvcid": "4420" 00:09:36.557 }, 00:09:36.557 "peer_address": { 00:09:36.557 "trtype": "TCP", 00:09:36.557 "adrfam": "IPv4", 00:09:36.557 "traddr": "10.0.0.1", 00:09:36.557 "trsvcid": "35290" 00:09:36.557 }, 00:09:36.557 "auth": { 00:09:36.557 "state": "completed", 00:09:36.557 "digest": "sha256", 00:09:36.557 "dhgroup": "ffdhe8192" 00:09:36.557 } 00:09:36.557 } 00:09:36.557 ]' 00:09:36.557 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:36.815 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:36.815 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:36.815 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:09:36.815 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:36.815 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:36.815 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:36.815 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:37.073 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:09:37.073 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:09:37.640 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:37.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:37.640 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:37.640 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.640 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
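On the target side, each key slot is exercised by re-registering the host NQN on the subsystem with that slot's key(s) and removing it again after the disconnect, as the rpc_cmd calls above show. A rough equivalent using the plain RPC client (rpc_cmd in the trace wraps the target's default RPC socket, and the key names assume the keys were loaded into the target earlier in the test):

hostnqn=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc
# allow this host to connect, authenticating with key1 (ckey1 makes it bidirectional)
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# ... host attaches, qpair auth state is checked, host disconnects ...
rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
# note: for key3 the trace omits --dhchap-ctrlr-key, evidently because ${ckeys[3]} is
# empty and the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion drops the flag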
00:09:37.640 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.640 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:37.640 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:37.640 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:38.207 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:09:38.207 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:38.207 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:38.207 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:09:38.207 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:38.207 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:38.207 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:38.207 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.207 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.207 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.207 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:38.207 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:38.207 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:38.775 00:09:38.775 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:38.775 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:38.775 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:38.775 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:39.034 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:39.034 12:44:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.034 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.034 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.034 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:39.034 { 00:09:39.034 "cntlid": 45, 00:09:39.034 "qid": 0, 00:09:39.034 "state": "enabled", 00:09:39.034 "thread": "nvmf_tgt_poll_group_000", 00:09:39.034 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:09:39.034 "listen_address": { 00:09:39.034 "trtype": "TCP", 00:09:39.034 "adrfam": "IPv4", 00:09:39.034 "traddr": "10.0.0.3", 00:09:39.034 "trsvcid": "4420" 00:09:39.034 }, 00:09:39.034 "peer_address": { 00:09:39.034 "trtype": "TCP", 00:09:39.034 "adrfam": "IPv4", 00:09:39.034 "traddr": "10.0.0.1", 00:09:39.034 "trsvcid": "35326" 00:09:39.034 }, 00:09:39.034 "auth": { 00:09:39.034 "state": "completed", 00:09:39.034 "digest": "sha256", 00:09:39.034 "dhgroup": "ffdhe8192" 00:09:39.034 } 00:09:39.034 } 00:09:39.034 ]' 00:09:39.034 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:39.034 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:39.034 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:39.034 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:09:39.034 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:39.034 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:39.034 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:39.034 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:39.328 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:09:39.328 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:09:39.920 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:39.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:39.920 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:39.920 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
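The same DH-HMAC-CHAP material is then exercised through the kernel initiator via nvme-cli, with the secrets passed in DHHC-1 wire format on the command line rather than as SPDK keyring names. Condensed from the trace (the full base64 secret strings from the log would be substituted for the shortened placeholders here):

hostid=85bcfa6f-4742-42db-8cde-87c16c4a32fc
hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
    -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret 'DHHC-1:02:MTQ5MTIx...' --dhchap-ctrl-secret 'DHHC-1:01:YmMxNGRk...'
# the controller enumerates, then the test drops it again before moving on
nvme disconnect -n nqn.2024-03.io.spdk:cnode0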
00:09:39.920 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.920 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.920 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:39.920 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:39.920 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:40.179 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:09:40.179 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:40.179 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:40.179 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:09:40.179 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:40.179 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:40.179 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key3 00:09:40.179 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.179 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:40.179 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.179 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:40.179 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:40.179 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:40.747 00:09:40.747 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:40.747 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:40.747 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:41.006 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:41.006 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:41.006 
12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.006 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.006 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.006 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:41.006 { 00:09:41.006 "cntlid": 47, 00:09:41.006 "qid": 0, 00:09:41.006 "state": "enabled", 00:09:41.006 "thread": "nvmf_tgt_poll_group_000", 00:09:41.006 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:09:41.006 "listen_address": { 00:09:41.006 "trtype": "TCP", 00:09:41.006 "adrfam": "IPv4", 00:09:41.006 "traddr": "10.0.0.3", 00:09:41.006 "trsvcid": "4420" 00:09:41.006 }, 00:09:41.006 "peer_address": { 00:09:41.006 "trtype": "TCP", 00:09:41.006 "adrfam": "IPv4", 00:09:41.006 "traddr": "10.0.0.1", 00:09:41.006 "trsvcid": "35356" 00:09:41.006 }, 00:09:41.006 "auth": { 00:09:41.006 "state": "completed", 00:09:41.006 "digest": "sha256", 00:09:41.006 "dhgroup": "ffdhe8192" 00:09:41.006 } 00:09:41.006 } 00:09:41.006 ]' 00:09:41.006 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:41.006 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:41.006 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:41.006 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:09:41.006 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:41.006 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:41.006 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:41.006 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:41.575 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:09:41.575 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:09:41.834 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:42.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:42.093 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:42.093 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.093 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
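Right after this point the trace moves on to the next digest (sha384) and restarts the DH-group loop, which makes the overall structure of target/auth.sh visible: a triple loop over digests, DH groups, and key slots, with one connect_authenticate round per combination. A condensed reconstruction of that driver (variable and function names follow the trace; the lists themselves are defined earlier in the script, outside this excerpt):

for digest in "${digests[@]}"; do          # sha256, sha384, ...
    for dhgroup in "${dhgroups[@]}"; do    # null, ffdhe6144, ffdhe8192, ...
        for keyid in "${!keys[@]}"; do     # key slots 0..3
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done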
00:09:42.093 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.093 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:09:42.093 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:42.093 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:42.093 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:42.093 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:42.352 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:09:42.352 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:42.352 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:42.352 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:42.352 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:42.352 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:42.352 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:42.352 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.352 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.352 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.352 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:42.352 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:42.352 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:42.612 00:09:42.612 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:42.612 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:42.612 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:42.871 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:42.871 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:42.871 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.871 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.871 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.871 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:42.871 { 00:09:42.871 "cntlid": 49, 00:09:42.871 "qid": 0, 00:09:42.871 "state": "enabled", 00:09:42.871 "thread": "nvmf_tgt_poll_group_000", 00:09:42.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:09:42.871 "listen_address": { 00:09:42.871 "trtype": "TCP", 00:09:42.871 "adrfam": "IPv4", 00:09:42.871 "traddr": "10.0.0.3", 00:09:42.871 "trsvcid": "4420" 00:09:42.871 }, 00:09:42.871 "peer_address": { 00:09:42.871 "trtype": "TCP", 00:09:42.871 "adrfam": "IPv4", 00:09:42.871 "traddr": "10.0.0.1", 00:09:42.871 "trsvcid": "35398" 00:09:42.871 }, 00:09:42.871 "auth": { 00:09:42.871 "state": "completed", 00:09:42.871 "digest": "sha384", 00:09:42.871 "dhgroup": "null" 00:09:42.871 } 00:09:42.871 } 00:09:42.871 ]' 00:09:42.871 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:42.871 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:42.871 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:42.871 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:42.871 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:42.871 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:42.871 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:42.871 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:43.131 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:09:43.131 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:09:44.070 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:44.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:44.070 12:44:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:44.070 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.070 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.070 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.070 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:44.070 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:44.070 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:44.070 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:09:44.070 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:44.070 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:44.070 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:44.071 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:44.071 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:44.071 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:44.071 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.071 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.071 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.071 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:44.071 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:44.071 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:44.641 00:09:44.641 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:44.641 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:44.641 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:44.641 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:44.641 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:44.641 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.641 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.641 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.641 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:44.641 { 00:09:44.641 "cntlid": 51, 00:09:44.641 "qid": 0, 00:09:44.641 "state": "enabled", 00:09:44.641 "thread": "nvmf_tgt_poll_group_000", 00:09:44.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:09:44.641 "listen_address": { 00:09:44.641 "trtype": "TCP", 00:09:44.641 "adrfam": "IPv4", 00:09:44.641 "traddr": "10.0.0.3", 00:09:44.641 "trsvcid": "4420" 00:09:44.641 }, 00:09:44.641 "peer_address": { 00:09:44.641 "trtype": "TCP", 00:09:44.641 "adrfam": "IPv4", 00:09:44.641 "traddr": "10.0.0.1", 00:09:44.641 "trsvcid": "39834" 00:09:44.641 }, 00:09:44.641 "auth": { 00:09:44.641 "state": "completed", 00:09:44.641 "digest": "sha384", 00:09:44.641 "dhgroup": "null" 00:09:44.641 } 00:09:44.641 } 00:09:44.641 ]' 00:09:44.641 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:44.901 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:44.901 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:44.901 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:44.901 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:44.901 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:44.901 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:44.901 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:45.160 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:09:45.160 12:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:09:45.727 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:45.727 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:45.727 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:45.727 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.727 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.727 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.727 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:45.727 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:45.727 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:45.985 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:09:45.985 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:45.986 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:45.986 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:45.986 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:45.986 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:45.986 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:45.986 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.986 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.986 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.986 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:45.986 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:45.986 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:46.245 00:09:46.245 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:46.245 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:46.245 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:46.505 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:46.505 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:46.505 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.505 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.505 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.505 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:46.505 { 00:09:46.505 "cntlid": 53, 00:09:46.505 "qid": 0, 00:09:46.505 "state": "enabled", 00:09:46.505 "thread": "nvmf_tgt_poll_group_000", 00:09:46.505 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:09:46.505 "listen_address": { 00:09:46.505 "trtype": "TCP", 00:09:46.505 "adrfam": "IPv4", 00:09:46.505 "traddr": "10.0.0.3", 00:09:46.505 "trsvcid": "4420" 00:09:46.505 }, 00:09:46.505 "peer_address": { 00:09:46.505 "trtype": "TCP", 00:09:46.505 "adrfam": "IPv4", 00:09:46.505 "traddr": "10.0.0.1", 00:09:46.505 "trsvcid": "39858" 00:09:46.505 }, 00:09:46.505 "auth": { 00:09:46.505 "state": "completed", 00:09:46.505 "digest": "sha384", 00:09:46.505 "dhgroup": "null" 00:09:46.505 } 00:09:46.505 } 00:09:46.505 ]' 00:09:46.505 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:46.505 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:46.505 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:46.505 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:46.505 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:46.764 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:46.764 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:46.764 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:47.023 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:09:47.023 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:09:47.591 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:47.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:47.591 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:47.591 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.591 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.591 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.591 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:47.591 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:47.591 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:47.850 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:09:47.850 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:47.850 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:47.850 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:47.850 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:47.850 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:47.850 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key3 00:09:47.850 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.850 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.850 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.850 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:47.850 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:47.850 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:48.109 00:09:48.109 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:48.109 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
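Each iteration above starts from the same two RPCs: the host-side bdev_nvme options are narrowed to a single digest/DH-group pair, and the host NQN is re-added to the subsystem with the DH-HMAC-CHAP key under test. A sketch of that setup for the key3 pass shown above, assuming key3 was registered earlier in the script and that the target RPC server uses its default socket (a --dhchap-ctrlr-key ckeyN is only passed for slots that have a controller key; key3 above does not):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc

# Host side: offer only sha384 and the null (no-DH) group for this pass.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups null

# Target side: allow the host, requiring authentication with key3.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3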
00:09:48.109 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:48.368 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:48.368 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:48.368 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.368 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.368 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.368 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:48.368 { 00:09:48.368 "cntlid": 55, 00:09:48.368 "qid": 0, 00:09:48.368 "state": "enabled", 00:09:48.368 "thread": "nvmf_tgt_poll_group_000", 00:09:48.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:09:48.368 "listen_address": { 00:09:48.368 "trtype": "TCP", 00:09:48.368 "adrfam": "IPv4", 00:09:48.368 "traddr": "10.0.0.3", 00:09:48.368 "trsvcid": "4420" 00:09:48.368 }, 00:09:48.368 "peer_address": { 00:09:48.368 "trtype": "TCP", 00:09:48.368 "adrfam": "IPv4", 00:09:48.368 "traddr": "10.0.0.1", 00:09:48.368 "trsvcid": "39886" 00:09:48.368 }, 00:09:48.368 "auth": { 00:09:48.368 "state": "completed", 00:09:48.368 "digest": "sha384", 00:09:48.368 "dhgroup": "null" 00:09:48.368 } 00:09:48.368 } 00:09:48.368 ]' 00:09:48.368 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:48.368 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:48.368 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:48.368 12:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:48.368 12:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:48.627 12:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:48.627 12:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:48.627 12:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:48.885 12:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:09:48.885 12:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:09:49.453 12:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:49.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
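Between SPDK-initiator attaches, the script also drives the same handshake from the kernel initiator with nvme-cli, then disconnects and removes the host entry before the next pass. The general shape of that exchange, with the secrets shown as placeholders (the log uses fixed DHHC-1 keys generated earlier in the test, and the controller secret is only supplied for key slots that have one):

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc

nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 \
    -q "$hostnqn" --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 \
    --dhchap-secret      '<DHHC-1 host secret>' \
    --dhchap-ctrl-secret '<DHHC-1 controller secret>'

nvme disconnect -n "$subnqn"

# Target side, once the kernel session is gone: drop the host entry so the
# next digest/dhgroup/key combination starts clean.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"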
00:09:49.453 12:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:49.453 12:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.453 12:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.453 12:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.453 12:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:49.453 12:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:49.453 12:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:09:49.453 12:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:09:49.453 12:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:09:49.453 12:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:49.453 12:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:49.453 12:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:49.453 12:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:49.453 12:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:49.453 12:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:49.453 12:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.453 12:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.712 12:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.712 12:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:49.712 12:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:49.712 12:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:49.971 00:09:49.971 12:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:49.971 
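The SPDK-initiator side of each pass is a plain bdev_nvme_attach_controller over the host RPC socket carrying the DH-HMAC-CHAP key names, mirrored by a detach once the qpair has been checked. A sketch for the ffdhe2048/key0/ckey0 pass that begins above (the key names refer to keys registered earlier in the script; addresses and NQNs match this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc

"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Later, after the auth state has been verified and before the next
# digest/dhgroup/key combination is configured:
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0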
12:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:49.971 12:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:50.230 12:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:50.230 12:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:50.230 12:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.230 12:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.230 12:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.230 12:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:50.230 { 00:09:50.230 "cntlid": 57, 00:09:50.230 "qid": 0, 00:09:50.230 "state": "enabled", 00:09:50.230 "thread": "nvmf_tgt_poll_group_000", 00:09:50.230 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:09:50.230 "listen_address": { 00:09:50.230 "trtype": "TCP", 00:09:50.230 "adrfam": "IPv4", 00:09:50.230 "traddr": "10.0.0.3", 00:09:50.230 "trsvcid": "4420" 00:09:50.230 }, 00:09:50.230 "peer_address": { 00:09:50.230 "trtype": "TCP", 00:09:50.230 "adrfam": "IPv4", 00:09:50.230 "traddr": "10.0.0.1", 00:09:50.230 "trsvcid": "39910" 00:09:50.230 }, 00:09:50.230 "auth": { 00:09:50.230 "state": "completed", 00:09:50.230 "digest": "sha384", 00:09:50.230 "dhgroup": "ffdhe2048" 00:09:50.230 } 00:09:50.230 } 00:09:50.230 ]' 00:09:50.230 12:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:50.230 12:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:50.230 12:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:50.230 12:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:50.230 12:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:50.230 12:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:50.230 12:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:50.230 12:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:50.798 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:09:50.798 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: 
--dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:09:51.366 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:51.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:51.366 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:51.366 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.366 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.366 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.366 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:51.366 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:09:51.366 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:09:51.625 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:09:51.625 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:51.625 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:51.625 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:51.625 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:51.625 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:51.625 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:51.625 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.625 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.625 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.625 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:51.625 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:51.625 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:51.884 00:09:51.885 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:51.885 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:51.885 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:52.144 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:52.144 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:52.144 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.144 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.144 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.144 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:52.144 { 00:09:52.144 "cntlid": 59, 00:09:52.144 "qid": 0, 00:09:52.144 "state": "enabled", 00:09:52.144 "thread": "nvmf_tgt_poll_group_000", 00:09:52.144 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:09:52.144 "listen_address": { 00:09:52.144 "trtype": "TCP", 00:09:52.144 "adrfam": "IPv4", 00:09:52.144 "traddr": "10.0.0.3", 00:09:52.144 "trsvcid": "4420" 00:09:52.144 }, 00:09:52.144 "peer_address": { 00:09:52.144 "trtype": "TCP", 00:09:52.144 "adrfam": "IPv4", 00:09:52.144 "traddr": "10.0.0.1", 00:09:52.144 "trsvcid": "39926" 00:09:52.144 }, 00:09:52.144 "auth": { 00:09:52.144 "state": "completed", 00:09:52.144 "digest": "sha384", 00:09:52.144 "dhgroup": "ffdhe2048" 00:09:52.144 } 00:09:52.144 } 00:09:52.144 ]' 00:09:52.144 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:52.144 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:52.144 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:52.144 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:52.144 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:52.403 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:52.403 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:52.403 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:52.663 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:09:52.663 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:09:53.231 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:53.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:53.231 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:53.231 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.231 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.231 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.231 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:53.231 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:09:53.231 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:09:53.491 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:09:53.491 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:53.491 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:53.491 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:53.491 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:53.491 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:53.491 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:53.491 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.491 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.491 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.491 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:53.491 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:53.491 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:53.750 00:09:53.750 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:53.750 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:53.750 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:54.318 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:54.318 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:54.318 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.318 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.318 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.318 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:54.318 { 00:09:54.318 "cntlid": 61, 00:09:54.318 "qid": 0, 00:09:54.318 "state": "enabled", 00:09:54.318 "thread": "nvmf_tgt_poll_group_000", 00:09:54.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:09:54.318 "listen_address": { 00:09:54.318 "trtype": "TCP", 00:09:54.318 "adrfam": "IPv4", 00:09:54.318 "traddr": "10.0.0.3", 00:09:54.318 "trsvcid": "4420" 00:09:54.318 }, 00:09:54.318 "peer_address": { 00:09:54.318 "trtype": "TCP", 00:09:54.318 "adrfam": "IPv4", 00:09:54.318 "traddr": "10.0.0.1", 00:09:54.318 "trsvcid": "50618" 00:09:54.318 }, 00:09:54.318 "auth": { 00:09:54.318 "state": "completed", 00:09:54.318 "digest": "sha384", 00:09:54.318 "dhgroup": "ffdhe2048" 00:09:54.318 } 00:09:54.318 } 00:09:54.318 ]' 00:09:54.318 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:54.318 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:54.318 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:54.318 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:54.318 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:54.318 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:54.318 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:54.319 12:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:54.578 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:09:54.578 12:45:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:09:55.143 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:55.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:55.143 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:55.143 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.143 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.143 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.143 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:55.143 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:09:55.143 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:09:55.401 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:09:55.401 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:55.401 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:55.401 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:55.401 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:55.401 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:55.401 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key3 00:09:55.401 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.401 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.401 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.401 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:55.401 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:55.401 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:55.659 00:09:55.659 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:55.659 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:55.659 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:56.234 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:56.234 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:56.234 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.234 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.234 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.234 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:56.234 { 00:09:56.234 "cntlid": 63, 00:09:56.234 "qid": 0, 00:09:56.234 "state": "enabled", 00:09:56.234 "thread": "nvmf_tgt_poll_group_000", 00:09:56.234 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:09:56.234 "listen_address": { 00:09:56.234 "trtype": "TCP", 00:09:56.234 "adrfam": "IPv4", 00:09:56.234 "traddr": "10.0.0.3", 00:09:56.234 "trsvcid": "4420" 00:09:56.234 }, 00:09:56.234 "peer_address": { 00:09:56.234 "trtype": "TCP", 00:09:56.234 "adrfam": "IPv4", 00:09:56.234 "traddr": "10.0.0.1", 00:09:56.234 "trsvcid": "50646" 00:09:56.234 }, 00:09:56.234 "auth": { 00:09:56.234 "state": "completed", 00:09:56.234 "digest": "sha384", 00:09:56.234 "dhgroup": "ffdhe2048" 00:09:56.234 } 00:09:56.234 } 00:09:56.234 ]' 00:09:56.234 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:56.234 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:56.234 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:56.234 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:56.234 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:56.234 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:56.234 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:56.234 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:56.502 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:09:56.502 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:09:57.070 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:57.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:57.070 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:57.070 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.070 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.070 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.070 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:57.070 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:57.070 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:09:57.070 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:09:57.329 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:09:57.329 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:57.329 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:57.329 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:57.329 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:57.329 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:57.329 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:57.329 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.329 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.329 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.329 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:57.329 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:09:57.329 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:57.895 00:09:57.895 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:57.895 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:57.895 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:57.895 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:57.895 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:57.895 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.895 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.895 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.895 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:57.895 { 00:09:57.895 "cntlid": 65, 00:09:57.895 "qid": 0, 00:09:57.895 "state": "enabled", 00:09:57.895 "thread": "nvmf_tgt_poll_group_000", 00:09:57.895 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:09:57.895 "listen_address": { 00:09:57.895 "trtype": "TCP", 00:09:57.895 "adrfam": "IPv4", 00:09:57.895 "traddr": "10.0.0.3", 00:09:57.895 "trsvcid": "4420" 00:09:57.895 }, 00:09:57.895 "peer_address": { 00:09:57.895 "trtype": "TCP", 00:09:57.895 "adrfam": "IPv4", 00:09:57.895 "traddr": "10.0.0.1", 00:09:57.895 "trsvcid": "50668" 00:09:57.895 }, 00:09:57.895 "auth": { 00:09:57.895 "state": "completed", 00:09:57.895 "digest": "sha384", 00:09:57.895 "dhgroup": "ffdhe3072" 00:09:57.895 } 00:09:57.895 } 00:09:57.895 ]' 00:09:57.895 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:58.153 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:58.153 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:58.154 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:58.154 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:58.154 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:58.154 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:58.154 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:58.411 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:09:58.411 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:09:58.978 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:58.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:58.978 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:09:58.978 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.978 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.978 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.978 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:58.978 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:09:58.978 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:09:59.237 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:09:59.237 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:59.237 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:59.237 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:59.237 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:59.237 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:59.237 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:59.237 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.237 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.237 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.237 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:59.237 12:45:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:59.237 12:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:59.496 00:09:59.755 12:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:59.755 12:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:59.755 12:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:00.014 12:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:00.014 12:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:00.014 12:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.014 12:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.014 12:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.014 12:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:00.014 { 00:10:00.014 "cntlid": 67, 00:10:00.014 "qid": 0, 00:10:00.014 "state": "enabled", 00:10:00.014 "thread": "nvmf_tgt_poll_group_000", 00:10:00.014 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:10:00.014 "listen_address": { 00:10:00.014 "trtype": "TCP", 00:10:00.014 "adrfam": "IPv4", 00:10:00.014 "traddr": "10.0.0.3", 00:10:00.014 "trsvcid": "4420" 00:10:00.014 }, 00:10:00.014 "peer_address": { 00:10:00.014 "trtype": "TCP", 00:10:00.014 "adrfam": "IPv4", 00:10:00.014 "traddr": "10.0.0.1", 00:10:00.014 "trsvcid": "50694" 00:10:00.014 }, 00:10:00.014 "auth": { 00:10:00.014 "state": "completed", 00:10:00.014 "digest": "sha384", 00:10:00.014 "dhgroup": "ffdhe3072" 00:10:00.014 } 00:10:00.014 } 00:10:00.014 ]' 00:10:00.014 12:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:00.014 12:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:00.014 12:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:00.014 12:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:00.014 12:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:00.014 12:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:00.014 12:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:00.014 12:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:00.273 12:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:10:00.273 12:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:10:00.851 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:01.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:01.126 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:01.126 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.126 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.126 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.126 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:01.126 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:01.126 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:01.126 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:10:01.126 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:01.126 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:01.126 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:01.126 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:01.126 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:01.126 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:01.126 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.126 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.402 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.402 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:01.402 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:01.402 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:01.676 00:10:01.676 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:01.676 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:01.676 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:01.940 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:01.940 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:01.940 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.940 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.940 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.940 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:01.940 { 00:10:01.940 "cntlid": 69, 00:10:01.940 "qid": 0, 00:10:01.940 "state": "enabled", 00:10:01.940 "thread": "nvmf_tgt_poll_group_000", 00:10:01.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:10:01.940 "listen_address": { 00:10:01.940 "trtype": "TCP", 00:10:01.940 "adrfam": "IPv4", 00:10:01.940 "traddr": "10.0.0.3", 00:10:01.940 "trsvcid": "4420" 00:10:01.940 }, 00:10:01.940 "peer_address": { 00:10:01.940 "trtype": "TCP", 00:10:01.940 "adrfam": "IPv4", 00:10:01.940 "traddr": "10.0.0.1", 00:10:01.940 "trsvcid": "50716" 00:10:01.940 }, 00:10:01.940 "auth": { 00:10:01.940 "state": "completed", 00:10:01.940 "digest": "sha384", 00:10:01.940 "dhgroup": "ffdhe3072" 00:10:01.940 } 00:10:01.940 } 00:10:01.940 ]' 00:10:01.940 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:01.940 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:01.940 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:01.940 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:01.940 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:01.940 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:01.940 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:10:01.940 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:02.199 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:10:02.199 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:10:02.766 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:02.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:02.766 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:02.766 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.766 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.766 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.766 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:02.766 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:02.766 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:03.025 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:10:03.025 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:03.025 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:03.025 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:03.025 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:03.025 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:03.025 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key3 00:10:03.025 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.025 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.025 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.025 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:03.025 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:03.025 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:03.285 00:10:03.285 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:03.285 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:03.285 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:03.543 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:03.543 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:03.543 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.543 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.826 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.826 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:03.826 { 00:10:03.826 "cntlid": 71, 00:10:03.826 "qid": 0, 00:10:03.826 "state": "enabled", 00:10:03.826 "thread": "nvmf_tgt_poll_group_000", 00:10:03.826 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:10:03.826 "listen_address": { 00:10:03.826 "trtype": "TCP", 00:10:03.826 "adrfam": "IPv4", 00:10:03.826 "traddr": "10.0.0.3", 00:10:03.826 "trsvcid": "4420" 00:10:03.826 }, 00:10:03.826 "peer_address": { 00:10:03.826 "trtype": "TCP", 00:10:03.826 "adrfam": "IPv4", 00:10:03.826 "traddr": "10.0.0.1", 00:10:03.826 "trsvcid": "50752" 00:10:03.826 }, 00:10:03.826 "auth": { 00:10:03.826 "state": "completed", 00:10:03.826 "digest": "sha384", 00:10:03.826 "dhgroup": "ffdhe3072" 00:10:03.826 } 00:10:03.826 } 00:10:03.826 ]' 00:10:03.826 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:03.826 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:03.826 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:03.826 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:03.826 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:03.826 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:03.826 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:03.826 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:04.104 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:10:04.104 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:10:04.672 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:04.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:04.672 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:04.672 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.672 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.672 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.672 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:04.672 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:04.672 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:04.672 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:04.931 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:10:04.931 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:04.931 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:04.931 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:04.931 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:04.931 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:04.931 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:04.931 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.931 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.931 12:45:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.931 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:04.931 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:04.931 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:05.190 00:10:05.190 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:05.190 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:05.190 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:05.449 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:05.449 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:05.449 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.449 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.449 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.449 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:05.449 { 00:10:05.449 "cntlid": 73, 00:10:05.449 "qid": 0, 00:10:05.449 "state": "enabled", 00:10:05.449 "thread": "nvmf_tgt_poll_group_000", 00:10:05.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:10:05.450 "listen_address": { 00:10:05.450 "trtype": "TCP", 00:10:05.450 "adrfam": "IPv4", 00:10:05.450 "traddr": "10.0.0.3", 00:10:05.450 "trsvcid": "4420" 00:10:05.450 }, 00:10:05.450 "peer_address": { 00:10:05.450 "trtype": "TCP", 00:10:05.450 "adrfam": "IPv4", 00:10:05.450 "traddr": "10.0.0.1", 00:10:05.450 "trsvcid": "33946" 00:10:05.450 }, 00:10:05.450 "auth": { 00:10:05.450 "state": "completed", 00:10:05.450 "digest": "sha384", 00:10:05.450 "dhgroup": "ffdhe4096" 00:10:05.450 } 00:10:05.450 } 00:10:05.450 ]' 00:10:05.450 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:05.450 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:05.450 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:05.709 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:05.709 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:05.709 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:05.709 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:05.709 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:05.968 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:10:05.968 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:10:06.540 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:06.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:06.540 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:06.540 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.540 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.540 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.540 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:06.540 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:06.540 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:06.799 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:10:06.799 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:06.799 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:06.799 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:06.799 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:06.799 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:06.799 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:06.799 12:45:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.799 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.799 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.799 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:06.799 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:06.799 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:07.058 00:10:07.058 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:07.058 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:07.058 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:07.317 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:07.317 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:07.317 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.317 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.317 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.318 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:07.318 { 00:10:07.318 "cntlid": 75, 00:10:07.318 "qid": 0, 00:10:07.318 "state": "enabled", 00:10:07.318 "thread": "nvmf_tgt_poll_group_000", 00:10:07.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:10:07.318 "listen_address": { 00:10:07.318 "trtype": "TCP", 00:10:07.318 "adrfam": "IPv4", 00:10:07.318 "traddr": "10.0.0.3", 00:10:07.318 "trsvcid": "4420" 00:10:07.318 }, 00:10:07.318 "peer_address": { 00:10:07.318 "trtype": "TCP", 00:10:07.318 "adrfam": "IPv4", 00:10:07.318 "traddr": "10.0.0.1", 00:10:07.318 "trsvcid": "33968" 00:10:07.318 }, 00:10:07.318 "auth": { 00:10:07.318 "state": "completed", 00:10:07.318 "digest": "sha384", 00:10:07.318 "dhgroup": "ffdhe4096" 00:10:07.318 } 00:10:07.318 } 00:10:07.318 ]' 00:10:07.318 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:07.576 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:07.576 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:07.576 12:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:10:07.576 12:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:07.576 12:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:07.576 12:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:07.577 12:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:07.835 12:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:10:07.835 12:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:10:08.404 12:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:08.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:08.404 12:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:08.404 12:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.404 12:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.404 12:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.404 12:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:08.404 12:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:08.404 12:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:08.663 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:10:08.663 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:08.663 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:08.663 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:08.663 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:08.663 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:08.663 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:08.663 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.663 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.663 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.663 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:08.663 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:08.663 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:08.922 00:10:09.181 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:09.181 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:09.182 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:09.441 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:09.441 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:09.441 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.441 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.441 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.441 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:09.441 { 00:10:09.441 "cntlid": 77, 00:10:09.441 "qid": 0, 00:10:09.441 "state": "enabled", 00:10:09.441 "thread": "nvmf_tgt_poll_group_000", 00:10:09.441 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:10:09.441 "listen_address": { 00:10:09.441 "trtype": "TCP", 00:10:09.441 "adrfam": "IPv4", 00:10:09.441 "traddr": "10.0.0.3", 00:10:09.441 "trsvcid": "4420" 00:10:09.441 }, 00:10:09.441 "peer_address": { 00:10:09.441 "trtype": "TCP", 00:10:09.441 "adrfam": "IPv4", 00:10:09.441 "traddr": "10.0.0.1", 00:10:09.441 "trsvcid": "33994" 00:10:09.441 }, 00:10:09.441 "auth": { 00:10:09.441 "state": "completed", 00:10:09.441 "digest": "sha384", 00:10:09.441 "dhgroup": "ffdhe4096" 00:10:09.441 } 00:10:09.441 } 00:10:09.441 ]' 00:10:09.441 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:09.441 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:09.441 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:10:09.441 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:09.441 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:09.441 12:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:09.441 12:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:09.441 12:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:09.700 12:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:10:09.700 12:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:10:10.269 12:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:10.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:10.269 12:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:10.269 12:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.269 12:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.269 12:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.269 12:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:10.269 12:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:10.269 12:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:10.528 12:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:10:10.528 12:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:10.528 12:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:10.528 12:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:10.528 12:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:10.528 12:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:10.528 12:45:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key3 00:10:10.528 12:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.528 12:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.528 12:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.528 12:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:10.528 12:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:10.528 12:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:11.095 00:10:11.095 12:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:11.095 12:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:11.095 12:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:11.356 12:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:11.356 12:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:11.356 12:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.356 12:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.356 12:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.356 12:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:11.356 { 00:10:11.356 "cntlid": 79, 00:10:11.356 "qid": 0, 00:10:11.356 "state": "enabled", 00:10:11.356 "thread": "nvmf_tgt_poll_group_000", 00:10:11.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:10:11.356 "listen_address": { 00:10:11.356 "trtype": "TCP", 00:10:11.356 "adrfam": "IPv4", 00:10:11.356 "traddr": "10.0.0.3", 00:10:11.356 "trsvcid": "4420" 00:10:11.356 }, 00:10:11.356 "peer_address": { 00:10:11.356 "trtype": "TCP", 00:10:11.356 "adrfam": "IPv4", 00:10:11.356 "traddr": "10.0.0.1", 00:10:11.356 "trsvcid": "34030" 00:10:11.356 }, 00:10:11.356 "auth": { 00:10:11.356 "state": "completed", 00:10:11.356 "digest": "sha384", 00:10:11.356 "dhgroup": "ffdhe4096" 00:10:11.356 } 00:10:11.356 } 00:10:11.356 ]' 00:10:11.356 12:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:11.356 12:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:11.356 12:45:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:11.356 12:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:11.356 12:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:11.356 12:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:11.356 12:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:11.356 12:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:11.616 12:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:10:11.616 12:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:10:12.552 12:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:12.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:12.552 12:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:12.552 12:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.552 12:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.552 12:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.552 12:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:12.552 12:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:12.552 12:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:12.552 12:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:12.552 12:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:10:12.552 12:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:12.552 12:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:12.552 12:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:12.552 12:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:12.552 12:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:12.552 12:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:12.552 12:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.552 12:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.552 12:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.552 12:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:12.552 12:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:12.552 12:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:13.119 00:10:13.119 12:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:13.119 12:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:13.119 12:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:13.377 12:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:13.377 12:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:13.377 12:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.377 12:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.377 12:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.377 12:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:13.377 { 00:10:13.377 "cntlid": 81, 00:10:13.377 "qid": 0, 00:10:13.377 "state": "enabled", 00:10:13.377 "thread": "nvmf_tgt_poll_group_000", 00:10:13.377 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:10:13.377 "listen_address": { 00:10:13.377 "trtype": "TCP", 00:10:13.377 "adrfam": "IPv4", 00:10:13.377 "traddr": "10.0.0.3", 00:10:13.377 "trsvcid": "4420" 00:10:13.377 }, 00:10:13.377 "peer_address": { 00:10:13.377 "trtype": "TCP", 00:10:13.377 "adrfam": "IPv4", 00:10:13.377 "traddr": "10.0.0.1", 00:10:13.377 "trsvcid": "34048" 00:10:13.377 }, 00:10:13.377 "auth": { 00:10:13.377 "state": "completed", 00:10:13.377 "digest": "sha384", 00:10:13.377 "dhgroup": "ffdhe6144" 00:10:13.377 } 00:10:13.377 } 00:10:13.377 ]' 00:10:13.377 12:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:10:13.377 12:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:13.377 12:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:13.377 12:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:13.377 12:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:13.377 12:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:13.377 12:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:13.377 12:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:13.636 12:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:10:13.636 12:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:10:14.572 12:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:14.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:14.572 12:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:14.572 12:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.572 12:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.572 12:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.572 12:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:14.572 12:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:14.572 12:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:14.572 12:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:10:14.572 12:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:14.572 12:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:14.572 12:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:10:14.572 12:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:14.572 12:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:14.572 12:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:14.572 12:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.572 12:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.572 12:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.572 12:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:14.572 12:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:14.572 12:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:15.139 00:10:15.139 12:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:15.139 12:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:15.139 12:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:15.398 12:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:15.398 12:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:15.398 12:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.398 12:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.398 12:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.398 12:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:15.398 { 00:10:15.398 "cntlid": 83, 00:10:15.398 "qid": 0, 00:10:15.398 "state": "enabled", 00:10:15.398 "thread": "nvmf_tgt_poll_group_000", 00:10:15.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:10:15.398 "listen_address": { 00:10:15.398 "trtype": "TCP", 00:10:15.398 "adrfam": "IPv4", 00:10:15.398 "traddr": "10.0.0.3", 00:10:15.398 "trsvcid": "4420" 00:10:15.398 }, 00:10:15.398 "peer_address": { 00:10:15.398 "trtype": "TCP", 00:10:15.398 "adrfam": "IPv4", 00:10:15.398 "traddr": "10.0.0.1", 00:10:15.398 "trsvcid": "48024" 00:10:15.398 }, 00:10:15.398 "auth": { 00:10:15.398 "state": "completed", 00:10:15.398 "digest": "sha384", 
00:10:15.398 "dhgroup": "ffdhe6144" 00:10:15.398 } 00:10:15.398 } 00:10:15.398 ]' 00:10:15.398 12:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:15.398 12:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:15.398 12:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:15.398 12:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:15.398 12:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:15.657 12:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:15.657 12:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:15.657 12:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:15.915 12:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:10:15.915 12:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:10:16.482 12:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:16.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:16.482 12:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:16.482 12:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.482 12:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.482 12:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.482 12:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:16.482 12:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:16.482 12:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:16.741 12:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:10:16.741 12:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:16.741 12:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:10:16.741 12:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:16.741 12:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:16.741 12:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:16.741 12:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:16.741 12:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.741 12:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.741 12:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.741 12:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:16.741 12:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:16.741 12:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:17.307 00:10:17.307 12:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:17.308 12:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:17.308 12:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:17.566 12:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:17.566 12:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:17.566 12:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.566 12:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.566 12:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.566 12:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:17.566 { 00:10:17.566 "cntlid": 85, 00:10:17.566 "qid": 0, 00:10:17.566 "state": "enabled", 00:10:17.566 "thread": "nvmf_tgt_poll_group_000", 00:10:17.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:10:17.566 "listen_address": { 00:10:17.566 "trtype": "TCP", 00:10:17.566 "adrfam": "IPv4", 00:10:17.566 "traddr": "10.0.0.3", 00:10:17.566 "trsvcid": "4420" 00:10:17.566 }, 00:10:17.566 "peer_address": { 00:10:17.566 "trtype": "TCP", 00:10:17.566 "adrfam": "IPv4", 00:10:17.566 "traddr": "10.0.0.1", 00:10:17.566 "trsvcid": "48058" 
00:10:17.566 }, 00:10:17.566 "auth": { 00:10:17.566 "state": "completed", 00:10:17.566 "digest": "sha384", 00:10:17.566 "dhgroup": "ffdhe6144" 00:10:17.566 } 00:10:17.566 } 00:10:17.566 ]' 00:10:17.566 12:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:17.566 12:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:17.566 12:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:17.566 12:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:17.566 12:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:17.566 12:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:17.566 12:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:17.566 12:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:17.825 12:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:10:17.825 12:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:10:18.391 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:18.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:18.649 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:18.649 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.649 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.649 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.649 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:18.649 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:18.649 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:18.908 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:10:18.908 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:10:18.908 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:18.908 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:18.908 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:18.908 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:18.908 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key3 00:10:18.908 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.908 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.908 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.908 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:18.908 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:18.908 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:19.167 00:10:19.167 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:19.167 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:19.167 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:19.425 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:19.425 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:19.425 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.425 12:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.425 12:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.425 12:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:19.425 { 00:10:19.425 "cntlid": 87, 00:10:19.425 "qid": 0, 00:10:19.425 "state": "enabled", 00:10:19.425 "thread": "nvmf_tgt_poll_group_000", 00:10:19.425 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:10:19.425 "listen_address": { 00:10:19.425 "trtype": "TCP", 00:10:19.425 "adrfam": "IPv4", 00:10:19.425 "traddr": "10.0.0.3", 00:10:19.425 "trsvcid": "4420" 00:10:19.425 }, 00:10:19.425 "peer_address": { 00:10:19.425 "trtype": "TCP", 00:10:19.425 "adrfam": "IPv4", 00:10:19.425 "traddr": "10.0.0.1", 00:10:19.425 "trsvcid": 
"48088" 00:10:19.425 }, 00:10:19.425 "auth": { 00:10:19.425 "state": "completed", 00:10:19.425 "digest": "sha384", 00:10:19.425 "dhgroup": "ffdhe6144" 00:10:19.425 } 00:10:19.425 } 00:10:19.425 ]' 00:10:19.425 12:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:19.425 12:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:19.425 12:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:19.684 12:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:19.684 12:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:19.684 12:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:19.684 12:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:19.684 12:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:19.942 12:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:10:19.942 12:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:10:20.508 12:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:20.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:20.508 12:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:20.509 12:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.509 12:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.509 12:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.509 12:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:20.509 12:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:20.509 12:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:20.509 12:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:20.767 12:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:10:20.767 12:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:10:20.767 12:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:20.767 12:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:20.767 12:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:20.767 12:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:20.767 12:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:20.767 12:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.767 12:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.767 12:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.767 12:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:20.767 12:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:20.767 12:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:21.333 00:10:21.333 12:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:21.333 12:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:21.333 12:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:21.591 12:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:21.591 12:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:21.591 12:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.591 12:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.591 12:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.591 12:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:21.591 { 00:10:21.591 "cntlid": 89, 00:10:21.591 "qid": 0, 00:10:21.591 "state": "enabled", 00:10:21.591 "thread": "nvmf_tgt_poll_group_000", 00:10:21.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:10:21.591 "listen_address": { 00:10:21.591 "trtype": "TCP", 00:10:21.591 "adrfam": "IPv4", 00:10:21.591 "traddr": "10.0.0.3", 00:10:21.591 "trsvcid": "4420" 00:10:21.591 }, 00:10:21.591 "peer_address": { 00:10:21.591 
"trtype": "TCP", 00:10:21.591 "adrfam": "IPv4", 00:10:21.591 "traddr": "10.0.0.1", 00:10:21.591 "trsvcid": "48120" 00:10:21.591 }, 00:10:21.591 "auth": { 00:10:21.591 "state": "completed", 00:10:21.591 "digest": "sha384", 00:10:21.591 "dhgroup": "ffdhe8192" 00:10:21.591 } 00:10:21.591 } 00:10:21.591 ]' 00:10:21.591 12:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:21.849 12:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:21.849 12:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:21.849 12:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:21.849 12:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:21.849 12:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:21.849 12:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:21.849 12:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:22.108 12:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:10:22.108 12:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:10:22.675 12:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:22.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:22.675 12:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:22.675 12:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.675 12:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.675 12:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.675 12:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:22.675 12:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:22.675 12:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:22.933 12:45:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:10:22.933 12:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:22.933 12:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:22.934 12:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:22.934 12:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:22.934 12:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:22.934 12:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:22.934 12:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.934 12:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.934 12:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.934 12:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:22.934 12:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:22.934 12:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:23.501 00:10:23.501 12:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:23.501 12:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:23.501 12:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:23.759 12:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:23.759 12:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:23.759 12:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.759 12:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.759 12:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.759 12:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:23.759 { 00:10:23.759 "cntlid": 91, 00:10:23.759 "qid": 0, 00:10:23.759 "state": "enabled", 00:10:23.759 "thread": "nvmf_tgt_poll_group_000", 00:10:23.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 
00:10:23.759 "listen_address": { 00:10:23.759 "trtype": "TCP", 00:10:23.759 "adrfam": "IPv4", 00:10:23.759 "traddr": "10.0.0.3", 00:10:23.759 "trsvcid": "4420" 00:10:23.759 }, 00:10:23.759 "peer_address": { 00:10:23.759 "trtype": "TCP", 00:10:23.759 "adrfam": "IPv4", 00:10:23.759 "traddr": "10.0.0.1", 00:10:23.759 "trsvcid": "48150" 00:10:23.759 }, 00:10:23.759 "auth": { 00:10:23.759 "state": "completed", 00:10:23.759 "digest": "sha384", 00:10:23.759 "dhgroup": "ffdhe8192" 00:10:23.759 } 00:10:23.759 } 00:10:23.759 ]' 00:10:23.759 12:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:23.759 12:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:23.759 12:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:24.018 12:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:24.018 12:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:24.018 12:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:24.018 12:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:24.018 12:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:24.276 12:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:10:24.276 12:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:10:24.843 12:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:24.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:24.843 12:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:24.843 12:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.843 12:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.843 12:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.843 12:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:24.843 12:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:24.843 12:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:25.102 12:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:10:25.102 12:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:25.102 12:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:25.102 12:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:25.102 12:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:25.102 12:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:25.102 12:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:25.102 12:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.102 12:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.102 12:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.102 12:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:25.102 12:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:25.102 12:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:25.669 00:10:25.669 12:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:25.669 12:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:25.669 12:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:25.927 12:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:25.927 12:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:25.927 12:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.927 12:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.927 12:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.927 12:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:25.927 { 00:10:25.927 "cntlid": 93, 00:10:25.927 "qid": 0, 00:10:25.927 "state": "enabled", 00:10:25.927 "thread": 
"nvmf_tgt_poll_group_000", 00:10:25.927 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:10:25.927 "listen_address": { 00:10:25.927 "trtype": "TCP", 00:10:25.927 "adrfam": "IPv4", 00:10:25.927 "traddr": "10.0.0.3", 00:10:25.927 "trsvcid": "4420" 00:10:25.927 }, 00:10:25.927 "peer_address": { 00:10:25.927 "trtype": "TCP", 00:10:25.927 "adrfam": "IPv4", 00:10:25.927 "traddr": "10.0.0.1", 00:10:25.927 "trsvcid": "56424" 00:10:25.927 }, 00:10:25.927 "auth": { 00:10:25.927 "state": "completed", 00:10:25.927 "digest": "sha384", 00:10:25.927 "dhgroup": "ffdhe8192" 00:10:25.927 } 00:10:25.927 } 00:10:25.927 ]' 00:10:25.927 12:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:25.927 12:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:25.927 12:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:25.927 12:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:25.928 12:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:25.928 12:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:25.928 12:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:25.928 12:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:26.186 12:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:10:26.186 12:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:10:26.753 12:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:26.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:26.753 12:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:26.753 12:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.753 12:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.753 12:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.753 12:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:26.753 12:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:26.753 12:45:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:27.012 12:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:10:27.012 12:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:27.012 12:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:27.012 12:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:27.012 12:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:27.012 12:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:27.012 12:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key3 00:10:27.012 12:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.012 12:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.012 12:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.012 12:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:27.012 12:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:27.012 12:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:27.604 00:10:27.604 12:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:27.604 12:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:27.604 12:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:27.877 12:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:27.877 12:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:27.877 12:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.877 12:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.877 12:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.877 12:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:27.877 { 00:10:27.877 "cntlid": 95, 00:10:27.877 "qid": 0, 00:10:27.877 "state": "enabled", 00:10:27.877 
"thread": "nvmf_tgt_poll_group_000", 00:10:27.877 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:10:27.877 "listen_address": { 00:10:27.877 "trtype": "TCP", 00:10:27.877 "adrfam": "IPv4", 00:10:27.877 "traddr": "10.0.0.3", 00:10:27.877 "trsvcid": "4420" 00:10:27.877 }, 00:10:27.877 "peer_address": { 00:10:27.877 "trtype": "TCP", 00:10:27.877 "adrfam": "IPv4", 00:10:27.877 "traddr": "10.0.0.1", 00:10:27.877 "trsvcid": "56456" 00:10:27.877 }, 00:10:27.877 "auth": { 00:10:27.877 "state": "completed", 00:10:27.877 "digest": "sha384", 00:10:27.877 "dhgroup": "ffdhe8192" 00:10:27.877 } 00:10:27.877 } 00:10:27.877 ]' 00:10:27.877 12:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:27.877 12:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:27.877 12:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:27.877 12:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:27.877 12:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:27.877 12:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:27.877 12:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:27.877 12:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:28.444 12:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:10:28.444 12:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:10:29.011 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:29.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:29.011 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:29.011 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.011 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.011 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.011 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:29.011 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:29.011 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:29.011 12:45:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:29.011 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:29.270 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:10:29.270 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:29.270 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:29.270 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:29.270 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:29.270 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:29.270 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:29.270 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.270 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.270 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.270 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:29.270 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:29.270 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:29.529 00:10:29.529 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:29.529 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:29.529 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:29.787 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:29.787 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:29.787 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.787 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.787 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.787 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:29.787 { 00:10:29.787 "cntlid": 97, 00:10:29.787 "qid": 0, 00:10:29.787 "state": "enabled", 00:10:29.787 "thread": "nvmf_tgt_poll_group_000", 00:10:29.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:10:29.787 "listen_address": { 00:10:29.787 "trtype": "TCP", 00:10:29.787 "adrfam": "IPv4", 00:10:29.787 "traddr": "10.0.0.3", 00:10:29.787 "trsvcid": "4420" 00:10:29.787 }, 00:10:29.787 "peer_address": { 00:10:29.787 "trtype": "TCP", 00:10:29.787 "adrfam": "IPv4", 00:10:29.787 "traddr": "10.0.0.1", 00:10:29.787 "trsvcid": "56466" 00:10:29.787 }, 00:10:29.787 "auth": { 00:10:29.787 "state": "completed", 00:10:29.787 "digest": "sha512", 00:10:29.787 "dhgroup": "null" 00:10:29.787 } 00:10:29.787 } 00:10:29.787 ]' 00:10:29.787 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:29.787 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:29.787 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:29.787 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:29.787 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:30.046 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:30.046 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:30.046 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:30.304 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:10:30.304 12:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:10:30.870 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:30.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:30.871 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:30.871 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.871 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.871 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:30.871 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:30.871 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:30.871 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:31.129 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:10:31.129 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:31.129 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:31.129 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:31.129 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:31.130 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:31.130 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:31.130 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.130 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.130 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.130 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:31.130 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:31.130 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:31.389 00:10:31.389 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:31.389 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:31.389 12:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:31.647 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:31.647 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:31.647 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.647 12:45:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.647 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.647 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:31.647 { 00:10:31.647 "cntlid": 99, 00:10:31.647 "qid": 0, 00:10:31.647 "state": "enabled", 00:10:31.647 "thread": "nvmf_tgt_poll_group_000", 00:10:31.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:10:31.647 "listen_address": { 00:10:31.647 "trtype": "TCP", 00:10:31.647 "adrfam": "IPv4", 00:10:31.647 "traddr": "10.0.0.3", 00:10:31.647 "trsvcid": "4420" 00:10:31.647 }, 00:10:31.647 "peer_address": { 00:10:31.647 "trtype": "TCP", 00:10:31.647 "adrfam": "IPv4", 00:10:31.647 "traddr": "10.0.0.1", 00:10:31.647 "trsvcid": "56492" 00:10:31.647 }, 00:10:31.647 "auth": { 00:10:31.647 "state": "completed", 00:10:31.647 "digest": "sha512", 00:10:31.647 "dhgroup": "null" 00:10:31.647 } 00:10:31.647 } 00:10:31.647 ]' 00:10:31.647 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:31.647 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:31.647 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:31.647 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:31.647 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:31.647 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:31.647 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:31.647 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:31.906 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:10:31.906 12:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:10:32.473 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:32.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:32.473 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:32.473 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.473 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.473 12:45:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.473 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:32.473 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:32.473 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:32.732 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:10:32.732 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:32.732 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:32.732 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:32.732 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:32.732 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:32.732 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:32.732 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.732 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.732 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.732 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:32.732 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:32.732 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:32.990 00:10:32.990 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:32.990 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:32.990 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:33.248 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:33.248 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:33.248 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.248 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.248 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.507 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:33.507 { 00:10:33.507 "cntlid": 101, 00:10:33.507 "qid": 0, 00:10:33.507 "state": "enabled", 00:10:33.507 "thread": "nvmf_tgt_poll_group_000", 00:10:33.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:10:33.507 "listen_address": { 00:10:33.507 "trtype": "TCP", 00:10:33.507 "adrfam": "IPv4", 00:10:33.507 "traddr": "10.0.0.3", 00:10:33.507 "trsvcid": "4420" 00:10:33.507 }, 00:10:33.507 "peer_address": { 00:10:33.507 "trtype": "TCP", 00:10:33.507 "adrfam": "IPv4", 00:10:33.507 "traddr": "10.0.0.1", 00:10:33.507 "trsvcid": "56518" 00:10:33.507 }, 00:10:33.507 "auth": { 00:10:33.507 "state": "completed", 00:10:33.507 "digest": "sha512", 00:10:33.507 "dhgroup": "null" 00:10:33.507 } 00:10:33.507 } 00:10:33.507 ]' 00:10:33.507 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:33.507 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:33.507 12:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:33.507 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:33.507 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:33.507 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:33.507 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:33.507 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:33.765 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:10:33.765 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:10:34.333 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:34.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:34.333 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:34.333 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.333 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:10:34.333 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.333 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:34.333 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:34.333 12:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:34.899 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:10:34.899 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:34.899 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:34.899 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:34.899 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:34.899 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:34.899 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key3 00:10:34.900 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.900 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.900 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.900 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:34.900 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:34.900 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:35.158 00:10:35.158 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:35.158 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:35.158 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:35.416 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:35.416 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:35.416 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:35.417 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.417 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.417 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:35.417 { 00:10:35.417 "cntlid": 103, 00:10:35.417 "qid": 0, 00:10:35.417 "state": "enabled", 00:10:35.417 "thread": "nvmf_tgt_poll_group_000", 00:10:35.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:10:35.417 "listen_address": { 00:10:35.417 "trtype": "TCP", 00:10:35.417 "adrfam": "IPv4", 00:10:35.417 "traddr": "10.0.0.3", 00:10:35.417 "trsvcid": "4420" 00:10:35.417 }, 00:10:35.417 "peer_address": { 00:10:35.417 "trtype": "TCP", 00:10:35.417 "adrfam": "IPv4", 00:10:35.417 "traddr": "10.0.0.1", 00:10:35.417 "trsvcid": "48658" 00:10:35.417 }, 00:10:35.417 "auth": { 00:10:35.417 "state": "completed", 00:10:35.417 "digest": "sha512", 00:10:35.417 "dhgroup": "null" 00:10:35.417 } 00:10:35.417 } 00:10:35.417 ]' 00:10:35.417 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:35.417 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:35.417 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:35.417 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:35.417 12:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:35.417 12:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:35.417 12:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:35.417 12:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:35.675 12:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:10:35.675 12:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:10:36.244 12:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:36.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:36.244 12:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:36.244 12:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.244 12:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.244 12:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:36.244 12:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:36.244 12:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:36.244 12:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:36.244 12:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:36.501 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:10:36.501 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:36.501 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:36.501 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:36.501 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:36.501 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:36.501 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:36.501 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.501 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.501 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.501 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:36.501 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:36.501 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:37.066 00:10:37.066 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:37.066 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:37.066 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:37.325 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:37.325 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:37.325 
12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.325 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.325 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.325 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:37.325 { 00:10:37.325 "cntlid": 105, 00:10:37.325 "qid": 0, 00:10:37.325 "state": "enabled", 00:10:37.325 "thread": "nvmf_tgt_poll_group_000", 00:10:37.325 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:10:37.325 "listen_address": { 00:10:37.325 "trtype": "TCP", 00:10:37.325 "adrfam": "IPv4", 00:10:37.325 "traddr": "10.0.0.3", 00:10:37.325 "trsvcid": "4420" 00:10:37.325 }, 00:10:37.325 "peer_address": { 00:10:37.325 "trtype": "TCP", 00:10:37.325 "adrfam": "IPv4", 00:10:37.325 "traddr": "10.0.0.1", 00:10:37.325 "trsvcid": "48682" 00:10:37.325 }, 00:10:37.325 "auth": { 00:10:37.325 "state": "completed", 00:10:37.325 "digest": "sha512", 00:10:37.325 "dhgroup": "ffdhe2048" 00:10:37.325 } 00:10:37.325 } 00:10:37.325 ]' 00:10:37.325 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:37.325 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:37.325 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:37.325 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:37.325 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:37.325 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:37.325 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:37.325 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:37.584 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:10:37.584 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:10:38.520 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:38.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:38.520 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:38.520 12:45:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.520 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.520 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.520 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:38.520 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:38.520 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:38.778 12:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:10:38.778 12:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:38.778 12:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:38.778 12:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:38.778 12:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:38.778 12:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:38.779 12:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.779 12:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.779 12:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.779 12:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.779 12:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.779 12:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.779 12:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:39.037 00:10:39.037 12:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:39.037 12:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:39.037 12:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:39.296 12:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:10:39.296 12:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:39.296 12:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.296 12:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.296 12:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.296 12:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:39.296 { 00:10:39.296 "cntlid": 107, 00:10:39.296 "qid": 0, 00:10:39.296 "state": "enabled", 00:10:39.296 "thread": "nvmf_tgt_poll_group_000", 00:10:39.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:10:39.296 "listen_address": { 00:10:39.296 "trtype": "TCP", 00:10:39.296 "adrfam": "IPv4", 00:10:39.296 "traddr": "10.0.0.3", 00:10:39.296 "trsvcid": "4420" 00:10:39.296 }, 00:10:39.296 "peer_address": { 00:10:39.296 "trtype": "TCP", 00:10:39.296 "adrfam": "IPv4", 00:10:39.296 "traddr": "10.0.0.1", 00:10:39.296 "trsvcid": "48696" 00:10:39.296 }, 00:10:39.296 "auth": { 00:10:39.296 "state": "completed", 00:10:39.296 "digest": "sha512", 00:10:39.296 "dhgroup": "ffdhe2048" 00:10:39.296 } 00:10:39.296 } 00:10:39.296 ]' 00:10:39.296 12:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:39.296 12:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:39.296 12:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:39.555 12:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:39.555 12:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:39.555 12:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:39.555 12:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:39.555 12:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:39.813 12:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:10:39.813 12:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:10:40.379 12:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:40.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:40.380 12:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:40.380 12:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.380 12:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.380 12:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.380 12:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:40.380 12:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:40.380 12:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:40.638 12:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:10:40.638 12:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:40.638 12:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:40.638 12:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:40.638 12:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:40.638 12:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:40.638 12:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.638 12:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.638 12:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.638 12:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.638 12:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.638 12:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.638 12:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.896 00:10:40.896 12:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:40.896 12:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:40.896 12:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:10:41.155 12:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:41.155 12:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:41.155 12:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.155 12:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.155 12:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.155 12:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:41.155 { 00:10:41.155 "cntlid": 109, 00:10:41.155 "qid": 0, 00:10:41.155 "state": "enabled", 00:10:41.155 "thread": "nvmf_tgt_poll_group_000", 00:10:41.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:10:41.155 "listen_address": { 00:10:41.155 "trtype": "TCP", 00:10:41.155 "adrfam": "IPv4", 00:10:41.155 "traddr": "10.0.0.3", 00:10:41.155 "trsvcid": "4420" 00:10:41.155 }, 00:10:41.155 "peer_address": { 00:10:41.155 "trtype": "TCP", 00:10:41.155 "adrfam": "IPv4", 00:10:41.155 "traddr": "10.0.0.1", 00:10:41.155 "trsvcid": "48720" 00:10:41.155 }, 00:10:41.155 "auth": { 00:10:41.155 "state": "completed", 00:10:41.155 "digest": "sha512", 00:10:41.155 "dhgroup": "ffdhe2048" 00:10:41.155 } 00:10:41.155 } 00:10:41.155 ]' 00:10:41.155 12:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:41.155 12:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:41.155 12:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:41.155 12:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:41.155 12:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:41.155 12:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:41.155 12:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:41.155 12:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:41.722 12:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:10:41.722 12:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:10:42.289 12:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:42.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:42.289 12:45:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:42.289 12:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.289 12:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.289 12:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.289 12:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:42.289 12:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:42.289 12:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:42.548 12:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:10:42.548 12:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:42.548 12:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:42.548 12:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:42.548 12:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:42.548 12:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:42.548 12:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key3 00:10:42.548 12:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.548 12:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.548 12:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.548 12:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:42.548 12:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:42.548 12:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:42.806 00:10:42.806 12:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:42.806 12:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:42.806 12:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:10:43.373 12:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:43.373 12:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:43.373 12:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.373 12:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.373 12:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.373 12:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:43.373 { 00:10:43.373 "cntlid": 111, 00:10:43.373 "qid": 0, 00:10:43.373 "state": "enabled", 00:10:43.373 "thread": "nvmf_tgt_poll_group_000", 00:10:43.373 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:10:43.373 "listen_address": { 00:10:43.373 "trtype": "TCP", 00:10:43.373 "adrfam": "IPv4", 00:10:43.373 "traddr": "10.0.0.3", 00:10:43.373 "trsvcid": "4420" 00:10:43.373 }, 00:10:43.373 "peer_address": { 00:10:43.373 "trtype": "TCP", 00:10:43.373 "adrfam": "IPv4", 00:10:43.373 "traddr": "10.0.0.1", 00:10:43.373 "trsvcid": "48744" 00:10:43.373 }, 00:10:43.373 "auth": { 00:10:43.373 "state": "completed", 00:10:43.373 "digest": "sha512", 00:10:43.373 "dhgroup": "ffdhe2048" 00:10:43.373 } 00:10:43.373 } 00:10:43.373 ]' 00:10:43.373 12:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:43.373 12:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:43.373 12:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:43.373 12:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:43.373 12:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:43.373 12:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:43.373 12:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:43.373 12:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:43.632 12:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:10:43.632 12:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:10:44.199 12:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:44.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:44.199 12:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:44.199 12:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.199 12:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.199 12:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.199 12:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:44.199 12:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:44.199 12:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:10:44.199 12:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:10:44.458 12:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:10:44.458 12:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:44.458 12:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:44.458 12:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:44.458 12:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:44.458 12:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:44.458 12:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.458 12:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.458 12:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.458 12:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.458 12:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.458 12:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.458 12:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.717 00:10:44.717 12:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:44.717 12:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:44.717 12:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:45.281 12:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:45.281 12:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:45.281 12:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.281 12:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.281 12:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.281 12:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:45.281 { 00:10:45.281 "cntlid": 113, 00:10:45.281 "qid": 0, 00:10:45.281 "state": "enabled", 00:10:45.281 "thread": "nvmf_tgt_poll_group_000", 00:10:45.281 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:10:45.281 "listen_address": { 00:10:45.281 "trtype": "TCP", 00:10:45.281 "adrfam": "IPv4", 00:10:45.281 "traddr": "10.0.0.3", 00:10:45.281 "trsvcid": "4420" 00:10:45.281 }, 00:10:45.281 "peer_address": { 00:10:45.281 "trtype": "TCP", 00:10:45.281 "adrfam": "IPv4", 00:10:45.281 "traddr": "10.0.0.1", 00:10:45.281 "trsvcid": "52648" 00:10:45.281 }, 00:10:45.281 "auth": { 00:10:45.281 "state": "completed", 00:10:45.281 "digest": "sha512", 00:10:45.281 "dhgroup": "ffdhe3072" 00:10:45.281 } 00:10:45.281 } 00:10:45.281 ]' 00:10:45.281 12:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:45.281 12:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:45.281 12:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:45.281 12:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:45.281 12:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:45.281 12:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:45.281 12:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:45.281 12:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:45.539 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:10:45.539 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret 
DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:10:46.106 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:46.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:46.106 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:46.106 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.106 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.106 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.106 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:46.106 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:10:46.106 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:10:46.365 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:10:46.365 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:46.365 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:46.365 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:46.365 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:46.365 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:46.365 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.365 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.365 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.365 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.365 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.365 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.365 12:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.623 00:10:46.623 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:46.881 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:46.881 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:47.140 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:47.140 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:47.140 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.140 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.140 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.140 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:47.140 { 00:10:47.140 "cntlid": 115, 00:10:47.140 "qid": 0, 00:10:47.140 "state": "enabled", 00:10:47.140 "thread": "nvmf_tgt_poll_group_000", 00:10:47.140 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:10:47.140 "listen_address": { 00:10:47.140 "trtype": "TCP", 00:10:47.140 "adrfam": "IPv4", 00:10:47.140 "traddr": "10.0.0.3", 00:10:47.140 "trsvcid": "4420" 00:10:47.140 }, 00:10:47.140 "peer_address": { 00:10:47.140 "trtype": "TCP", 00:10:47.140 "adrfam": "IPv4", 00:10:47.140 "traddr": "10.0.0.1", 00:10:47.140 "trsvcid": "52674" 00:10:47.140 }, 00:10:47.140 "auth": { 00:10:47.140 "state": "completed", 00:10:47.140 "digest": "sha512", 00:10:47.140 "dhgroup": "ffdhe3072" 00:10:47.140 } 00:10:47.140 } 00:10:47.140 ]' 00:10:47.140 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:47.140 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:47.140 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:47.140 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:47.140 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:47.140 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:47.140 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:47.140 12:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:47.399 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:10:47.399 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 
85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:10:48.334 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:48.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:48.335 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:48.335 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.335 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.335 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.335 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:48.335 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:10:48.335 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:10:48.335 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:10:48.335 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:48.335 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:48.335 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:48.335 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:48.335 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:48.335 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.335 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.335 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.335 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.335 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.335 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.335 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.908 00:10:48.908 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:48.908 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:48.908 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:48.908 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:48.908 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:48.908 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.908 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.908 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.908 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:48.908 { 00:10:48.908 "cntlid": 117, 00:10:48.908 "qid": 0, 00:10:48.908 "state": "enabled", 00:10:48.908 "thread": "nvmf_tgt_poll_group_000", 00:10:48.908 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:10:48.908 "listen_address": { 00:10:48.908 "trtype": "TCP", 00:10:48.908 "adrfam": "IPv4", 00:10:48.908 "traddr": "10.0.0.3", 00:10:48.908 "trsvcid": "4420" 00:10:48.908 }, 00:10:48.908 "peer_address": { 00:10:48.908 "trtype": "TCP", 00:10:48.908 "adrfam": "IPv4", 00:10:48.908 "traddr": "10.0.0.1", 00:10:48.908 "trsvcid": "52706" 00:10:48.908 }, 00:10:48.908 "auth": { 00:10:48.908 "state": "completed", 00:10:48.908 "digest": "sha512", 00:10:48.908 "dhgroup": "ffdhe3072" 00:10:48.908 } 00:10:48.908 } 00:10:48.908 ]' 00:10:49.174 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:49.174 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:49.174 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:49.174 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:49.174 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:49.174 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:49.174 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:49.174 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:49.432 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:10:49.432 12:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:10:49.999 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:49.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:49.999 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:49.999 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.999 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.999 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.999 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:49.999 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:10:49.999 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:10:50.257 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:10:50.257 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:50.258 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:50.258 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:50.258 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:50.258 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:50.258 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key3 00:10:50.258 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.258 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.258 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.258 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:50.258 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:50.258 12:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:50.516 00:10:50.516 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:50.516 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:50.516 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:51.083 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.083 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.083 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.083 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.083 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.083 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:51.083 { 00:10:51.083 "cntlid": 119, 00:10:51.083 "qid": 0, 00:10:51.083 "state": "enabled", 00:10:51.083 "thread": "nvmf_tgt_poll_group_000", 00:10:51.083 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:10:51.083 "listen_address": { 00:10:51.083 "trtype": "TCP", 00:10:51.083 "adrfam": "IPv4", 00:10:51.083 "traddr": "10.0.0.3", 00:10:51.083 "trsvcid": "4420" 00:10:51.083 }, 00:10:51.083 "peer_address": { 00:10:51.083 "trtype": "TCP", 00:10:51.083 "adrfam": "IPv4", 00:10:51.083 "traddr": "10.0.0.1", 00:10:51.083 "trsvcid": "52726" 00:10:51.083 }, 00:10:51.083 "auth": { 00:10:51.083 "state": "completed", 00:10:51.083 "digest": "sha512", 00:10:51.083 "dhgroup": "ffdhe3072" 00:10:51.083 } 00:10:51.083 } 00:10:51.083 ]' 00:10:51.083 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:51.083 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:51.083 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:51.083 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:51.083 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:51.083 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:51.083 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.083 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:51.341 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:10:51.341 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:10:51.944 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:51.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:51.944 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:51.944 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.944 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.944 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.944 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:51.944 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:51.944 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:10:51.944 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:10:52.217 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:10:52.218 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:52.218 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:52.218 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:52.218 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:52.218 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:52.218 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.218 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.218 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.218 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.218 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.218 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.218 12:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.784 00:10:52.784 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:52.784 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:52.784 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:52.784 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:52.784 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:52.784 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.784 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.784 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.784 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:52.784 { 00:10:52.784 "cntlid": 121, 00:10:52.784 "qid": 0, 00:10:52.784 "state": "enabled", 00:10:52.784 "thread": "nvmf_tgt_poll_group_000", 00:10:52.784 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:10:52.784 "listen_address": { 00:10:52.784 "trtype": "TCP", 00:10:52.784 "adrfam": "IPv4", 00:10:52.784 "traddr": "10.0.0.3", 00:10:52.784 "trsvcid": "4420" 00:10:52.784 }, 00:10:52.784 "peer_address": { 00:10:52.784 "trtype": "TCP", 00:10:52.784 "adrfam": "IPv4", 00:10:52.784 "traddr": "10.0.0.1", 00:10:52.784 "trsvcid": "52744" 00:10:52.784 }, 00:10:52.784 "auth": { 00:10:52.784 "state": "completed", 00:10:52.784 "digest": "sha512", 00:10:52.784 "dhgroup": "ffdhe4096" 00:10:52.784 } 00:10:52.784 } 00:10:52.784 ]' 00:10:52.784 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:53.043 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:53.043 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:53.043 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:53.043 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:53.043 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:53.043 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:53.043 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:53.301 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret 
DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:10:53.301 12:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:10:53.867 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:53.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:53.867 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:53.867 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.867 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.867 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.868 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:53.868 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:10:53.868 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:10:54.126 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:10:54.126 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:54.126 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:54.126 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:54.126 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:54.126 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:54.126 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.126 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.126 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.126 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.126 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.126 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.126 12:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.384 00:10:54.385 12:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:54.385 12:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:54.385 12:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:54.643 12:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:54.901 12:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:54.901 12:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.901 12:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.901 12:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.901 12:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:54.901 { 00:10:54.902 "cntlid": 123, 00:10:54.902 "qid": 0, 00:10:54.902 "state": "enabled", 00:10:54.902 "thread": "nvmf_tgt_poll_group_000", 00:10:54.902 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:10:54.902 "listen_address": { 00:10:54.902 "trtype": "TCP", 00:10:54.902 "adrfam": "IPv4", 00:10:54.902 "traddr": "10.0.0.3", 00:10:54.902 "trsvcid": "4420" 00:10:54.902 }, 00:10:54.902 "peer_address": { 00:10:54.902 "trtype": "TCP", 00:10:54.902 "adrfam": "IPv4", 00:10:54.902 "traddr": "10.0.0.1", 00:10:54.902 "trsvcid": "38746" 00:10:54.902 }, 00:10:54.902 "auth": { 00:10:54.902 "state": "completed", 00:10:54.902 "digest": "sha512", 00:10:54.902 "dhgroup": "ffdhe4096" 00:10:54.902 } 00:10:54.902 } 00:10:54.902 ]' 00:10:54.902 12:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:54.902 12:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:54.902 12:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:54.902 12:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:54.902 12:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:54.902 12:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:54.902 12:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:54.902 12:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:55.161 12:46:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:10:55.161 12:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:10:55.728 12:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:55.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:55.728 12:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:55.728 12:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.728 12:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.728 12:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.728 12:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:55.728 12:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:10:55.728 12:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:10:55.987 12:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:10:55.987 12:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:55.987 12:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:55.987 12:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:55.987 12:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:55.987 12:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:55.987 12:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:55.987 12:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.987 12:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.987 12:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.987 12:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:55.987 12:46:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:55.987 12:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.555 00:10:56.555 12:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:56.555 12:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:56.555 12:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:56.814 12:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:56.814 12:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:56.814 12:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.814 12:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.814 12:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.814 12:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:56.814 { 00:10:56.814 "cntlid": 125, 00:10:56.814 "qid": 0, 00:10:56.814 "state": "enabled", 00:10:56.814 "thread": "nvmf_tgt_poll_group_000", 00:10:56.814 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:10:56.814 "listen_address": { 00:10:56.814 "trtype": "TCP", 00:10:56.814 "adrfam": "IPv4", 00:10:56.814 "traddr": "10.0.0.3", 00:10:56.814 "trsvcid": "4420" 00:10:56.814 }, 00:10:56.814 "peer_address": { 00:10:56.814 "trtype": "TCP", 00:10:56.814 "adrfam": "IPv4", 00:10:56.814 "traddr": "10.0.0.1", 00:10:56.814 "trsvcid": "38754" 00:10:56.814 }, 00:10:56.814 "auth": { 00:10:56.814 "state": "completed", 00:10:56.814 "digest": "sha512", 00:10:56.814 "dhgroup": "ffdhe4096" 00:10:56.814 } 00:10:56.814 } 00:10:56.814 ]' 00:10:56.814 12:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:56.814 12:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:56.814 12:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:56.814 12:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:56.814 12:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:56.814 12:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:56.814 12:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:56.814 12:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:57.073 12:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:10:57.073 12:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:10:58.009 12:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:58.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:58.010 12:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:58.010 12:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.010 12:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.010 12:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.010 12:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:58.010 12:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:10:58.010 12:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:10:58.010 12:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:10:58.010 12:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:58.010 12:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:58.010 12:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:58.010 12:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:58.010 12:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:58.010 12:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key3 00:10:58.010 12:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.010 12:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.010 12:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.010 12:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:10:58.010 12:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:58.010 12:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:58.577 00:10:58.577 12:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:58.577 12:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.577 12:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:58.836 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:58.836 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:58.836 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.836 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.836 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.836 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:58.836 { 00:10:58.836 "cntlid": 127, 00:10:58.836 "qid": 0, 00:10:58.836 "state": "enabled", 00:10:58.836 "thread": "nvmf_tgt_poll_group_000", 00:10:58.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:10:58.836 "listen_address": { 00:10:58.836 "trtype": "TCP", 00:10:58.836 "adrfam": "IPv4", 00:10:58.836 "traddr": "10.0.0.3", 00:10:58.836 "trsvcid": "4420" 00:10:58.836 }, 00:10:58.836 "peer_address": { 00:10:58.836 "trtype": "TCP", 00:10:58.836 "adrfam": "IPv4", 00:10:58.836 "traddr": "10.0.0.1", 00:10:58.836 "trsvcid": "38788" 00:10:58.836 }, 00:10:58.836 "auth": { 00:10:58.836 "state": "completed", 00:10:58.836 "digest": "sha512", 00:10:58.836 "dhgroup": "ffdhe4096" 00:10:58.836 } 00:10:58.836 } 00:10:58.836 ]' 00:10:58.836 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:58.836 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:58.837 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:58.837 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:58.837 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:58.837 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:58.837 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:58.837 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.095 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:10:59.095 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:10:59.661 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:59.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:59.661 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:10:59.661 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.661 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.661 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.661 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:59.661 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:59.661 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:10:59.662 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:10:59.920 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:10:59.920 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:59.920 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:59.920 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:59.920 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:59.920 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:59.920 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:59.920 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.920 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.920 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.920 12:46:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:59.920 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:59.920 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.487 00:11:00.487 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:00.487 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:00.487 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.746 12:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.746 12:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:00.746 12:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.746 12:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.746 12:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.746 12:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:00.746 { 00:11:00.746 "cntlid": 129, 00:11:00.746 "qid": 0, 00:11:00.746 "state": "enabled", 00:11:00.746 "thread": "nvmf_tgt_poll_group_000", 00:11:00.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:11:00.746 "listen_address": { 00:11:00.746 "trtype": "TCP", 00:11:00.746 "adrfam": "IPv4", 00:11:00.746 "traddr": "10.0.0.3", 00:11:00.746 "trsvcid": "4420" 00:11:00.746 }, 00:11:00.746 "peer_address": { 00:11:00.746 "trtype": "TCP", 00:11:00.746 "adrfam": "IPv4", 00:11:00.746 "traddr": "10.0.0.1", 00:11:00.746 "trsvcid": "38806" 00:11:00.746 }, 00:11:00.746 "auth": { 00:11:00.746 "state": "completed", 00:11:00.746 "digest": "sha512", 00:11:00.746 "dhgroup": "ffdhe6144" 00:11:00.746 } 00:11:00.746 } 00:11:00.746 ]' 00:11:00.746 12:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:00.746 12:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:00.746 12:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:00.746 12:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:00.746 12:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:00.746 12:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:00.746 12:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:00.746 12:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.005 12:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:11:01.005 12:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:11:01.572 12:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.572 12:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:11:01.572 12:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.572 12:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.572 12:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.572 12:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:01.572 12:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:01.572 12:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:01.831 12:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:11:01.831 12:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:01.831 12:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:01.831 12:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:01.831 12:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:01.831 12:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:01.831 12:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.831 12:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.831 12:46:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.831 12:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.831 12:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.831 12:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.831 12:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:02.398 00:11:02.398 12:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:02.398 12:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:02.398 12:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:02.657 12:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:02.657 12:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:02.657 12:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.657 12:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.657 12:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.657 12:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:02.657 { 00:11:02.657 "cntlid": 131, 00:11:02.657 "qid": 0, 00:11:02.657 "state": "enabled", 00:11:02.657 "thread": "nvmf_tgt_poll_group_000", 00:11:02.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:11:02.657 "listen_address": { 00:11:02.657 "trtype": "TCP", 00:11:02.657 "adrfam": "IPv4", 00:11:02.657 "traddr": "10.0.0.3", 00:11:02.657 "trsvcid": "4420" 00:11:02.657 }, 00:11:02.657 "peer_address": { 00:11:02.657 "trtype": "TCP", 00:11:02.657 "adrfam": "IPv4", 00:11:02.657 "traddr": "10.0.0.1", 00:11:02.657 "trsvcid": "38840" 00:11:02.657 }, 00:11:02.657 "auth": { 00:11:02.657 "state": "completed", 00:11:02.657 "digest": "sha512", 00:11:02.657 "dhgroup": "ffdhe6144" 00:11:02.657 } 00:11:02.657 } 00:11:02.657 ]' 00:11:02.657 12:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:02.657 12:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:02.657 12:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:02.915 12:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:02.915 12:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:11:02.915 12:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:02.915 12:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:02.915 12:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:03.174 12:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:11:03.174 12:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:11:03.741 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:03.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:03.741 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:11:03.741 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.741 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.741 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.741 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:03.741 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:03.741 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:04.000 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:11:04.000 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:04.000 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:04.000 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:04.000 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:04.000 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.000 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:04.000 12:46:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.000 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.000 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.000 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:04.000 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:04.000 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:04.566 00:11:04.566 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:04.566 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:04.566 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:04.825 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:04.825 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:04.825 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.825 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.825 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.825 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:04.825 { 00:11:04.825 "cntlid": 133, 00:11:04.825 "qid": 0, 00:11:04.825 "state": "enabled", 00:11:04.825 "thread": "nvmf_tgt_poll_group_000", 00:11:04.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:11:04.825 "listen_address": { 00:11:04.825 "trtype": "TCP", 00:11:04.825 "adrfam": "IPv4", 00:11:04.825 "traddr": "10.0.0.3", 00:11:04.825 "trsvcid": "4420" 00:11:04.825 }, 00:11:04.825 "peer_address": { 00:11:04.825 "trtype": "TCP", 00:11:04.825 "adrfam": "IPv4", 00:11:04.825 "traddr": "10.0.0.1", 00:11:04.825 "trsvcid": "37218" 00:11:04.825 }, 00:11:04.825 "auth": { 00:11:04.825 "state": "completed", 00:11:04.825 "digest": "sha512", 00:11:04.825 "dhgroup": "ffdhe6144" 00:11:04.825 } 00:11:04.825 } 00:11:04.825 ]' 00:11:04.825 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:05.084 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:05.084 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:05.084 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:11:05.084 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:05.084 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:05.084 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:05.084 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.342 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:11:05.342 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:11:05.908 12:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:05.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:05.908 12:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:11:05.909 12:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.909 12:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.909 12:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.909 12:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:05.909 12:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:05.909 12:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:06.167 12:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:11:06.167 12:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:06.167 12:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:06.167 12:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:06.167 12:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:06.167 12:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.167 12:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key3 00:11:06.167 12:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.167 12:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.167 12:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.167 12:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:06.167 12:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:06.167 12:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:06.734 00:11:06.734 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:06.734 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:06.734 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.993 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:06.993 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:06.993 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.993 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.993 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.993 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:06.993 { 00:11:06.993 "cntlid": 135, 00:11:06.993 "qid": 0, 00:11:06.993 "state": "enabled", 00:11:06.993 "thread": "nvmf_tgt_poll_group_000", 00:11:06.993 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:11:06.993 "listen_address": { 00:11:06.993 "trtype": "TCP", 00:11:06.993 "adrfam": "IPv4", 00:11:06.993 "traddr": "10.0.0.3", 00:11:06.993 "trsvcid": "4420" 00:11:06.993 }, 00:11:06.993 "peer_address": { 00:11:06.993 "trtype": "TCP", 00:11:06.993 "adrfam": "IPv4", 00:11:06.993 "traddr": "10.0.0.1", 00:11:06.993 "trsvcid": "37244" 00:11:06.993 }, 00:11:06.993 "auth": { 00:11:06.993 "state": "completed", 00:11:06.993 "digest": "sha512", 00:11:06.993 "dhgroup": "ffdhe6144" 00:11:06.993 } 00:11:06.993 } 00:11:06.993 ]' 00:11:06.993 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:06.993 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:06.993 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:06.993 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:06.993 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:07.251 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.251 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.251 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:07.510 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:11:07.510 12:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:11:08.076 12:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.076 12:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:11:08.076 12:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.076 12:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.076 12:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.076 12:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:08.076 12:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:08.076 12:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:08.076 12:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:08.334 12:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:11:08.334 12:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:08.334 12:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:08.334 12:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:08.334 12:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:08.334 12:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.334 12:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:08.335 12:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.335 12:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.335 12:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.335 12:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:08.335 12:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:08.335 12:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:08.901 00:11:08.901 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:08.901 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.901 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:09.159 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:09.159 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:09.159 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.160 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.160 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.160 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:09.160 { 00:11:09.160 "cntlid": 137, 00:11:09.160 "qid": 0, 00:11:09.160 "state": "enabled", 00:11:09.160 "thread": "nvmf_tgt_poll_group_000", 00:11:09.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:11:09.160 "listen_address": { 00:11:09.160 "trtype": "TCP", 00:11:09.160 "adrfam": "IPv4", 00:11:09.160 "traddr": "10.0.0.3", 00:11:09.160 "trsvcid": "4420" 00:11:09.160 }, 00:11:09.160 "peer_address": { 00:11:09.160 "trtype": "TCP", 00:11:09.160 "adrfam": "IPv4", 00:11:09.160 "traddr": "10.0.0.1", 00:11:09.160 "trsvcid": "37280" 00:11:09.160 }, 00:11:09.160 "auth": { 00:11:09.160 "state": "completed", 00:11:09.160 "digest": "sha512", 00:11:09.160 "dhgroup": "ffdhe8192" 00:11:09.160 } 00:11:09.160 } 00:11:09.160 ]' 00:11:09.160 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:09.418 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:09.418 12:46:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:09.418 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:09.418 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:09.418 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.418 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.418 12:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.676 12:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:11:09.676 12:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:11:10.243 12:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.243 12:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:11:10.243 12:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.243 12:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.243 12:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.243 12:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:10.243 12:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:10.243 12:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:10.501 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:11:10.501 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:10.501 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:10.501 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:10.501 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:10.501 12:46:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.501 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.501 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.501 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.501 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.501 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.501 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.501 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:11.068 00:11:11.068 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:11.068 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.068 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:11.326 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.326 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:11.326 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.326 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.326 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.326 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:11.326 { 00:11:11.326 "cntlid": 139, 00:11:11.326 "qid": 0, 00:11:11.326 "state": "enabled", 00:11:11.326 "thread": "nvmf_tgt_poll_group_000", 00:11:11.326 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:11:11.326 "listen_address": { 00:11:11.326 "trtype": "TCP", 00:11:11.326 "adrfam": "IPv4", 00:11:11.326 "traddr": "10.0.0.3", 00:11:11.326 "trsvcid": "4420" 00:11:11.326 }, 00:11:11.326 "peer_address": { 00:11:11.326 "trtype": "TCP", 00:11:11.326 "adrfam": "IPv4", 00:11:11.326 "traddr": "10.0.0.1", 00:11:11.326 "trsvcid": "37306" 00:11:11.326 }, 00:11:11.326 "auth": { 00:11:11.326 "state": "completed", 00:11:11.326 "digest": "sha512", 00:11:11.326 "dhgroup": "ffdhe8192" 00:11:11.326 } 00:11:11.326 } 00:11:11.326 ]' 00:11:11.326 12:46:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:11.326 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:11.326 12:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:11.584 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:11.584 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:11.584 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.584 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.584 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.843 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:11:11.843 12:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: --dhchap-ctrl-secret DHHC-1:02:YWYxOGUwMzVjZWRkYmY3NGVkNDhjMTEwYzI1ODMxOTZmODcxMDVjZDA0MDBkMmJipg3asA==: 00:11:12.410 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:12.668 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:11:12.668 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.668 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.669 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.669 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:12.669 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:12.669 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:12.927 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:11:12.927 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:12.927 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:12.927 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:11:12.927 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:12.927 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.927 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.927 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.927 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.927 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.927 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.927 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.927 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:13.494 00:11:13.494 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:13.494 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.494 12:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:13.753 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.753 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.753 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.753 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.753 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.753 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:13.753 { 00:11:13.753 "cntlid": 141, 00:11:13.753 "qid": 0, 00:11:13.753 "state": "enabled", 00:11:13.753 "thread": "nvmf_tgt_poll_group_000", 00:11:13.753 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:11:13.753 "listen_address": { 00:11:13.753 "trtype": "TCP", 00:11:13.753 "adrfam": "IPv4", 00:11:13.753 "traddr": "10.0.0.3", 00:11:13.753 "trsvcid": "4420" 00:11:13.753 }, 00:11:13.753 "peer_address": { 00:11:13.753 "trtype": "TCP", 00:11:13.753 "adrfam": "IPv4", 00:11:13.753 "traddr": "10.0.0.1", 00:11:13.753 "trsvcid": "37314" 00:11:13.753 }, 00:11:13.753 "auth": { 00:11:13.753 "state": "completed", 00:11:13.753 "digest": 
"sha512", 00:11:13.753 "dhgroup": "ffdhe8192" 00:11:13.753 } 00:11:13.753 } 00:11:13.753 ]' 00:11:13.753 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:13.753 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:13.753 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:13.753 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:13.753 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:13.753 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.753 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.753 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.320 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:11:14.320 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:01:YmMxNGRkYThiN2MxOGZjN2IzMTRmNGY3NzNiYTI2YmT+DzyX: 00:11:14.888 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.888 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:11:14.888 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.888 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.888 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.888 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:14.888 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:14.888 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:15.146 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:11:15.146 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:15.146 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:11:15.146 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:15.146 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:15.146 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.146 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key3 00:11:15.146 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.146 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.146 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.146 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:15.146 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:15.146 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:15.713 00:11:15.713 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:15.713 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.713 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:15.982 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.982 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.982 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.982 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.982 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.982 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:15.982 { 00:11:15.982 "cntlid": 143, 00:11:15.982 "qid": 0, 00:11:15.982 "state": "enabled", 00:11:15.982 "thread": "nvmf_tgt_poll_group_000", 00:11:15.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:11:15.982 "listen_address": { 00:11:15.982 "trtype": "TCP", 00:11:15.982 "adrfam": "IPv4", 00:11:15.982 "traddr": "10.0.0.3", 00:11:15.982 "trsvcid": "4420" 00:11:15.982 }, 00:11:15.982 "peer_address": { 00:11:15.982 "trtype": "TCP", 00:11:15.982 "adrfam": "IPv4", 00:11:15.982 "traddr": "10.0.0.1", 00:11:15.982 "trsvcid": "39090" 00:11:15.982 }, 00:11:15.982 "auth": { 00:11:15.982 "state": "completed", 00:11:15.982 
"digest": "sha512", 00:11:15.982 "dhgroup": "ffdhe8192" 00:11:15.982 } 00:11:15.982 } 00:11:15.982 ]' 00:11:15.982 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:15.982 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:15.982 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:16.255 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:16.255 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:16.255 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.255 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.255 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.513 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:11:16.513 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:11:17.080 12:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:17.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:17.080 12:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:11:17.080 12:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.080 12:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.080 12:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.080 12:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:11:17.080 12:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:11:17.080 12:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:11:17.080 12:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:17.080 12:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:17.080 12:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:17.339 12:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:11:17.339 12:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:17.339 12:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:17.339 12:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:17.339 12:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:17.339 12:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.339 12:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.339 12:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.339 12:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.339 12:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.339 12:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.339 12:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.339 12:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.906 00:11:17.906 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:17.906 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.906 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:18.165 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.165 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:18.165 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.165 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.165 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.165 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:18.165 { 00:11:18.165 "cntlid": 145, 00:11:18.165 "qid": 0, 00:11:18.165 "state": "enabled", 00:11:18.165 "thread": "nvmf_tgt_poll_group_000", 00:11:18.165 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:11:18.165 "listen_address": { 00:11:18.165 "trtype": "TCP", 00:11:18.165 "adrfam": "IPv4", 00:11:18.165 "traddr": "10.0.0.3", 00:11:18.165 "trsvcid": "4420" 00:11:18.165 }, 00:11:18.165 "peer_address": { 00:11:18.165 "trtype": "TCP", 00:11:18.165 "adrfam": "IPv4", 00:11:18.165 "traddr": "10.0.0.1", 00:11:18.165 "trsvcid": "39122" 00:11:18.165 }, 00:11:18.165 "auth": { 00:11:18.165 "state": "completed", 00:11:18.165 "digest": "sha512", 00:11:18.165 "dhgroup": "ffdhe8192" 00:11:18.165 } 00:11:18.165 } 00:11:18.165 ]' 00:11:18.165 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:18.165 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:18.165 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:18.165 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:18.165 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:18.165 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:18.165 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:18.165 12:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.424 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:11:18.424 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:00:OThkM2E5MDBkMmQ1MWE4NzFlNDZkOWQwMWU4Mzc5YmY1MmVkMzQ4MmZiYTVkZjEzgVELSw==: --dhchap-ctrl-secret DHHC-1:03:MjQ2YjI4OTY0NDM1ZDFkMTE3ZDQ4MTQ4OWQ4OGIzYjRjMWI0MDkzZWU1NDM2ZjNiMTkxYWE2ZDRhMzg5ZjIxOZfqZow=: 00:11:19.358 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:19.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:19.358 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:11:19.358 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.358 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.358 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.358 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key1 00:11:19.358 12:46:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.358 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.358 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.358 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:11:19.358 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:19.358 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:11:19.358 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:19.358 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:19.359 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:19.359 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:19.359 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:11:19.359 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:11:19.359 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:11:19.617 request: 00:11:19.617 { 00:11:19.617 "name": "nvme0", 00:11:19.617 "trtype": "tcp", 00:11:19.617 "traddr": "10.0.0.3", 00:11:19.617 "adrfam": "ipv4", 00:11:19.617 "trsvcid": "4420", 00:11:19.617 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:19.617 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:11:19.617 "prchk_reftag": false, 00:11:19.617 "prchk_guard": false, 00:11:19.617 "hdgst": false, 00:11:19.617 "ddgst": false, 00:11:19.617 "dhchap_key": "key2", 00:11:19.617 "allow_unrecognized_csi": false, 00:11:19.617 "method": "bdev_nvme_attach_controller", 00:11:19.617 "req_id": 1 00:11:19.617 } 00:11:19.617 Got JSON-RPC error response 00:11:19.617 response: 00:11:19.617 { 00:11:19.617 "code": -5, 00:11:19.617 "message": "Input/output error" 00:11:19.617 } 00:11:19.876 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:19.876 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:19.876 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:19.876 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:19.876 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:11:19.876 
12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.876 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.876 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.877 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:19.877 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.877 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.877 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.877 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:19.877 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:19.877 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:19.877 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:19.877 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:19.877 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:19.877 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:19.877 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:19.877 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:19.877 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:20.444 request: 00:11:20.444 { 00:11:20.444 "name": "nvme0", 00:11:20.444 "trtype": "tcp", 00:11:20.444 "traddr": "10.0.0.3", 00:11:20.444 "adrfam": "ipv4", 00:11:20.444 "trsvcid": "4420", 00:11:20.444 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:20.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:11:20.444 "prchk_reftag": false, 00:11:20.444 "prchk_guard": false, 00:11:20.444 "hdgst": false, 00:11:20.444 "ddgst": false, 00:11:20.444 "dhchap_key": "key1", 00:11:20.444 "dhchap_ctrlr_key": "ckey2", 00:11:20.444 "allow_unrecognized_csi": false, 00:11:20.444 "method": "bdev_nvme_attach_controller", 00:11:20.444 "req_id": 1 00:11:20.444 } 00:11:20.444 Got JSON-RPC error response 00:11:20.444 response: 00:11:20.444 { 
00:11:20.444 "code": -5, 00:11:20.444 "message": "Input/output error" 00:11:20.444 } 00:11:20.444 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:20.444 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:20.444 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:20.444 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:20.444 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:11:20.444 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.444 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.444 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.444 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key1 00:11:20.444 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.444 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.444 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.444 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.444 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:20.444 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.444 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:20.445 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:20.445 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:20.445 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:20.445 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.445 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.445 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.704 
request: 00:11:20.704 { 00:11:20.704 "name": "nvme0", 00:11:20.704 "trtype": "tcp", 00:11:20.704 "traddr": "10.0.0.3", 00:11:20.704 "adrfam": "ipv4", 00:11:20.704 "trsvcid": "4420", 00:11:20.704 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:20.704 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:11:20.704 "prchk_reftag": false, 00:11:20.704 "prchk_guard": false, 00:11:20.704 "hdgst": false, 00:11:20.704 "ddgst": false, 00:11:20.704 "dhchap_key": "key1", 00:11:20.704 "dhchap_ctrlr_key": "ckey1", 00:11:20.704 "allow_unrecognized_csi": false, 00:11:20.704 "method": "bdev_nvme_attach_controller", 00:11:20.704 "req_id": 1 00:11:20.704 } 00:11:20.704 Got JSON-RPC error response 00:11:20.704 response: 00:11:20.704 { 00:11:20.704 "code": -5, 00:11:20.704 "message": "Input/output error" 00:11:20.704 } 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 66871 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 66871 ']' 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 66871 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66871 00:11:20.963 killing process with pid 66871 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66871' 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 66871 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 66871 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:20.963 12:46:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=69819 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 69819 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 69819 ']' 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:20.963 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.222 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.222 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:21.222 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:21.222 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:21.222 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.481 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.481 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:21.481 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 69819 00:11:21.481 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 69819 ']' 00:11:21.481 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.481 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.481 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
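[editor's note] At this point the trace has restarted the target (pid 69819) with -L nvmf_auth for auth debug logging, and the keyring_file_add_key / nvmf_subsystem_add_host calls that follow configure keyring-backed DH-HMAC-CHAP on it. A minimal, illustrative sketch of that target-side flow is shown below; it reuses the key-file paths and NQNs visible in this run, assumes the key files already contain DHHC-1-formatted secrets (their generation is not shown in this log), and condenses the test's rpc_cmd wrapper into direct rpc.py calls rather than quoting the script itself.

# Target side (default /var/tmp/spdk.sock): register key files with the keyring,
# then require DH-HMAC-CHAP for the host on the subsystem.
scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.wGd
scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oW0
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

The keys are referenced by keyring name (key0, ckey0, ...) from here on, instead of being passed inline as DHHC-1 secrets the way the earlier part of this trace did.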
00:11:21.481 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.481 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.740 null0 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wGd 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.oW0 ]] 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oW0 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.vBc 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.6lR ]] 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6lR 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:11:21.740 12:46:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ftR 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.R4a ]] 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.R4a 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.6a7 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key3 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
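[editor's note] The rpc.py line that follows is the host-side half of the same handshake: attaching a controller through the host RPC socket with the keyring-backed key3. A condensed, illustrative version of that pattern, reusing the addresses, NQNs and the DHHC-1 secret already printed in this trace (not an excerpt from the test script):

# Host side (/var/tmp/host.sock): restrict negotiation, then attach with key3.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3

# Kernel-initiator equivalent, passing the secret in-band as in the nvme_connect
# calls earlier in this log:
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc \
    --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 \
    --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=:

When the keys do not match (the NOT bdev_connect cases above), the attach is expected to fail with the JSON-RPC error response already dumped in this trace: code -5, "Input/output error".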
00:11:21.740 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:22.678 nvme0n1 00:11:22.678 12:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:22.678 12:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:22.678 12:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.937 12:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.937 12:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.937 12:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.937 12:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.937 12:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.937 12:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:22.937 { 00:11:22.937 "cntlid": 1, 00:11:22.937 "qid": 0, 00:11:22.937 "state": "enabled", 00:11:22.937 "thread": "nvmf_tgt_poll_group_000", 00:11:22.937 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:11:22.937 "listen_address": { 00:11:22.937 "trtype": "TCP", 00:11:22.937 "adrfam": "IPv4", 00:11:22.937 "traddr": "10.0.0.3", 00:11:22.937 "trsvcid": "4420" 00:11:22.937 }, 00:11:22.937 "peer_address": { 00:11:22.937 "trtype": "TCP", 00:11:22.937 "adrfam": "IPv4", 00:11:22.937 "traddr": "10.0.0.1", 00:11:22.937 "trsvcid": "39166" 00:11:22.937 }, 00:11:22.937 "auth": { 00:11:22.937 "state": "completed", 00:11:22.937 "digest": "sha512", 00:11:22.937 "dhgroup": "ffdhe8192" 00:11:22.937 } 00:11:22.937 } 00:11:22.937 ]' 00:11:22.937 12:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:22.937 12:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:22.937 12:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:22.937 12:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:22.937 12:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:23.195 12:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.195 12:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.195 12:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:23.453 12:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:11:23.453 12:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:11:24.019 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.019 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:11:24.019 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.019 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.019 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.019 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key3 00:11:24.019 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.019 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.019 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.019 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:11:24.019 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:11:24.278 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:11:24.278 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:24.278 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:11:24.278 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:24.278 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:24.278 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:24.278 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:24.278 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:24.278 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:24.278 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:24.536 request: 00:11:24.536 { 00:11:24.536 "name": "nvme0", 00:11:24.536 "trtype": "tcp", 00:11:24.536 "traddr": "10.0.0.3", 00:11:24.536 "adrfam": "ipv4", 00:11:24.536 "trsvcid": "4420", 00:11:24.536 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:24.536 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:11:24.536 "prchk_reftag": false, 00:11:24.536 "prchk_guard": false, 00:11:24.536 "hdgst": false, 00:11:24.536 "ddgst": false, 00:11:24.537 "dhchap_key": "key3", 00:11:24.537 "allow_unrecognized_csi": false, 00:11:24.537 "method": "bdev_nvme_attach_controller", 00:11:24.537 "req_id": 1 00:11:24.537 } 00:11:24.537 Got JSON-RPC error response 00:11:24.537 response: 00:11:24.537 { 00:11:24.537 "code": -5, 00:11:24.537 "message": "Input/output error" 00:11:24.537 } 00:11:24.537 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:24.537 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:24.537 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:24.537 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:24.537 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:11:24.537 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:11:24.537 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:11:24.537 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:11:24.795 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:11:24.795 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:24.795 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:11:24.795 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:24.795 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:24.795 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:24.795 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:24.795 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:24.795 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:24.795 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:25.054 request: 00:11:25.054 { 00:11:25.054 "name": "nvme0", 00:11:25.054 "trtype": "tcp", 00:11:25.054 "traddr": "10.0.0.3", 00:11:25.054 "adrfam": "ipv4", 00:11:25.054 "trsvcid": "4420", 00:11:25.054 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:25.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:11:25.054 "prchk_reftag": false, 00:11:25.054 "prchk_guard": false, 00:11:25.054 "hdgst": false, 00:11:25.054 "ddgst": false, 00:11:25.054 "dhchap_key": "key3", 00:11:25.054 "allow_unrecognized_csi": false, 00:11:25.054 "method": "bdev_nvme_attach_controller", 00:11:25.054 "req_id": 1 00:11:25.054 } 00:11:25.054 Got JSON-RPC error response 00:11:25.054 response: 00:11:25.054 { 00:11:25.054 "code": -5, 00:11:25.054 "message": "Input/output error" 00:11:25.054 } 00:11:25.054 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:25.054 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:25.054 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:25.054 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:25.054 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:11:25.054 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:11:25.054 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:11:25.054 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:25.054 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:25.054 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:25.313 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:11:25.313 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.313 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.313 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.313 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:11:25.313 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.313 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.313 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.313 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:25.313 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:25.313 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:25.313 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:25.313 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:25.313 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:25.313 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:25.313 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:25.313 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:25.313 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:25.880 request: 00:11:25.880 { 00:11:25.880 "name": "nvme0", 00:11:25.880 "trtype": "tcp", 00:11:25.880 "traddr": "10.0.0.3", 00:11:25.880 "adrfam": "ipv4", 00:11:25.880 "trsvcid": "4420", 00:11:25.880 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:25.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:11:25.880 "prchk_reftag": false, 00:11:25.880 "prchk_guard": false, 00:11:25.880 "hdgst": false, 00:11:25.880 "ddgst": false, 00:11:25.880 "dhchap_key": "key0", 00:11:25.880 "dhchap_ctrlr_key": "key1", 00:11:25.880 "allow_unrecognized_csi": false, 00:11:25.880 "method": "bdev_nvme_attach_controller", 00:11:25.881 "req_id": 1 00:11:25.881 } 00:11:25.881 Got JSON-RPC error response 00:11:25.881 response: 00:11:25.881 { 00:11:25.881 "code": -5, 00:11:25.881 "message": "Input/output error" 00:11:25.881 } 00:11:25.881 12:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:25.881 12:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:25.881 12:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:25.881 12:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:11:25.881 12:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:11:25.881 12:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:11:25.881 12:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:11:26.140 nvme0n1 00:11:26.140 12:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:11:26.140 12:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.140 12:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:11:26.398 12:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.398 12:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.398 12:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.657 12:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key1 00:11:26.657 12:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.657 12:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.657 12:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.657 12:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:11:26.657 12:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:11:26.657 12:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:11:27.593 nvme0n1 00:11:27.593 12:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:11:27.593 12:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.593 12:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:11:27.852 12:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.852 12:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:27.852 12:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.852 12:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.852 12:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.852 12:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:11:27.852 12:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.852 12:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:11:28.111 12:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.111 12:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:11:28.111 12:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid 85bcfa6f-4742-42db-8cde-87c16c4a32fc -l 0 --dhchap-secret DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: --dhchap-ctrl-secret DHHC-1:03:Mzg5YWE1ZTZiNTI1NThjMzVlZThiNmQ3NjYxYTMyM2Q1ODlmMjg5MDVjMTZkNDJlZjc2ZDY0MmVlOTFmOTEzMcQpgLo=: 00:11:28.679 12:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:11:28.679 12:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:11:28.679 12:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:11:28.679 12:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:11:28.679 12:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:11:28.679 12:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:11:28.679 12:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:11:28.679 12:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:28.679 12:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.937 12:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:11:28.937 12:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:28.937 12:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:11:28.937 12:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:28.937 12:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:28.937 12:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:28.937 12:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:28.937 12:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:11:28.937 12:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:11:28.937 12:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:11:29.503 request: 00:11:29.503 { 00:11:29.503 "name": "nvme0", 00:11:29.503 "trtype": "tcp", 00:11:29.503 "traddr": "10.0.0.3", 00:11:29.503 "adrfam": "ipv4", 00:11:29.503 "trsvcid": "4420", 00:11:29.503 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:29.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc", 00:11:29.503 "prchk_reftag": false, 00:11:29.503 "prchk_guard": false, 00:11:29.503 "hdgst": false, 00:11:29.503 "ddgst": false, 00:11:29.503 "dhchap_key": "key1", 00:11:29.503 "allow_unrecognized_csi": false, 00:11:29.503 "method": "bdev_nvme_attach_controller", 00:11:29.503 "req_id": 1 00:11:29.503 } 00:11:29.503 Got JSON-RPC error response 00:11:29.503 response: 00:11:29.503 { 00:11:29.503 "code": -5, 00:11:29.503 "message": "Input/output error" 00:11:29.503 } 00:11:29.503 12:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:29.503 12:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:29.503 12:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:29.503 12:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:29.503 12:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:29.503 12:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:29.503 12:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:30.438 nvme0n1 00:11:30.438 
12:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:11:30.438 12:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:11:30.438 12:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:30.697 12:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:30.697 12:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:30.697 12:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.955 12:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:11:30.955 12:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.955 12:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.955 12:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.956 12:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:11:30.956 12:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:11:30.956 12:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:11:31.214 nvme0n1 00:11:31.214 12:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:11:31.214 12:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:11:31.214 12:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.473 12:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.473 12:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.473 12:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.732 12:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key1 --dhchap-ctrlr-key key3 00:11:31.732 12:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.732 12:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.732 12:46:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.732 12:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: '' 2s 00:11:31.732 12:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:11:31.732 12:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:11:31.732 12:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: 00:11:31.732 12:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:11:31.732 12:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:11:31.732 12:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:11:31.732 12:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: ]] 00:11:31.732 12:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NDVjMGFjYTA2ODg4YThlOWQwYTgwOWUzMmZiN2ZjMGYvgiDv: 00:11:31.732 12:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:11:31.732 12:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:11:31.732 12:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:11:33.633 12:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:11:33.633 12:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:11:33.633 12:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:11:33.633 12:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:11:33.892 12:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:11:33.892 12:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:11:33.892 12:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:11:33.892 12:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key1 --dhchap-ctrlr-key key2 00:11:33.892 12:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.892 12:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.892 12:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.892 12:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: 2s 00:11:33.892 12:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:11:33.892 12:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:11:33.892 12:46:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:11:33.892 12:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: 00:11:33.892 12:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:11:33.892 12:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:11:33.892 12:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:11:33.892 12:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: ]] 00:11:33.892 12:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MTQ5MTIxNzQ0Njc2ZTliZmNiZmY2MjdlYTk1Y2M5MGU3OWU2MTFhODc3NDI0N2I2rqemcA==: 00:11:33.892 12:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:11:33.892 12:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:11:35.792 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:11:35.792 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:11:35.792 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:11:35.792 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:11:35.792 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:11:35.792 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:11:35.792 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:11:35.792 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.792 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:35.792 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.792 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.792 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.792 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:11:35.792 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:11:35.792 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:11:36.727 nvme0n1 00:11:36.727 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:36.727 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.727 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.727 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.727 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:36.727 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:37.294 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:11:37.294 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.294 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:11:37.552 12:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.552 12:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:11:37.552 12:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.552 12:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.552 12:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.552 12:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:11:37.552 12:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:11:37.810 12:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:11:37.810 12:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.811 12:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:11:38.069 12:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.069 12:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:38.069 12:46:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.069 12:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.069 12:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.069 12:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:11:38.069 12:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:38.069 12:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:11:38.069 12:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:11:38.069 12:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:38.069 12:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:11:38.069 12:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:38.069 12:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:11:38.069 12:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:11:38.637 request: 00:11:38.637 { 00:11:38.637 "name": "nvme0", 00:11:38.637 "dhchap_key": "key1", 00:11:38.637 "dhchap_ctrlr_key": "key3", 00:11:38.637 "method": "bdev_nvme_set_keys", 00:11:38.637 "req_id": 1 00:11:38.637 } 00:11:38.637 Got JSON-RPC error response 00:11:38.637 response: 00:11:38.637 { 00:11:38.637 "code": -13, 00:11:38.637 "message": "Permission denied" 00:11:38.637 } 00:11:38.637 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:38.637 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:38.637 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:38.637 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:38.637 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:11:38.637 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:11:38.637 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.896 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:11:38.896 12:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:11:39.832 12:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:11:39.832 12:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:11:39.832 12:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.091 12:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:11:40.091 12:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:40.091 12:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.091 12:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.349 12:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.349 12:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:11:40.349 12:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:11:40.349 12:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:11:41.285 nvme0n1 00:11:41.285 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:41.285 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.285 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.285 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.285 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:11:41.285 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:41.285 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:11:41.285 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:11:41.285 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:41.285 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:11:41.285 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:41.285 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:11:41.285 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:11:41.894 request: 00:11:41.894 { 00:11:41.894 "name": "nvme0", 00:11:41.894 "dhchap_key": "key2", 00:11:41.894 "dhchap_ctrlr_key": "key0", 00:11:41.894 "method": "bdev_nvme_set_keys", 00:11:41.894 "req_id": 1 00:11:41.894 } 00:11:41.894 Got JSON-RPC error response 00:11:41.894 response: 00:11:41.894 { 00:11:41.894 "code": -13, 00:11:41.894 "message": "Permission denied" 00:11:41.894 } 00:11:41.894 12:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:41.894 12:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:41.894 12:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:41.894 12:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:41.894 12:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:11:41.894 12:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:11:41.894 12:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.158 12:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:11:42.158 12:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:11:43.094 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:11:43.095 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.095 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:11:43.354 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:11:43.354 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:11:43.354 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:11:43.354 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 66890 00:11:43.354 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 66890 ']' 00:11:43.354 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 66890 00:11:43.354 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:11:43.354 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.354 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66890 00:11:43.354 killing process with pid 66890 00:11:43.354 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:43.354 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:43.354 12:46:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66890' 00:11:43.354 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 66890 00:11:43.354 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 66890 00:11:43.613 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:11:43.613 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:43.613 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:11:43.613 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:43.613 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:11:43.613 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:43.613 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:43.613 rmmod nvme_tcp 00:11:43.613 rmmod nvme_fabrics 00:11:43.613 rmmod nvme_keyring 00:11:43.613 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:43.613 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:11:43.613 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:11:43.613 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 69819 ']' 00:11:43.613 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 69819 00:11:43.613 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 69819 ']' 00:11:43.613 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 69819 00:11:43.613 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:11:43.613 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.613 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69819 00:11:43.613 killing process with pid 69819 00:11:43.613 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:43.613 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:43.613 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69819' 00:11:43.613 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 69819 00:11:43.613 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 69819 00:11:43.872 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:43.872 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:43.872 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:43.872 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:11:43.872 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
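nvmftestfini in the trace tears the fabric back down: it unloads nvme-tcp, nvme-fabrics and nvme-keyring, kills the remaining target process, and strips the firewall rules the test added. The rule cleanup is a pipeline whose stages are traced immediately above and below this point; roughly, and assuming every rule added for the test carries an SPDK_NVMF marker in the saved ruleset:

  # rewrite the firewall without any rule tagged SPDK_NVMF, leaving all other rules untouched
  iptables-save | grep -v SPDK_NVMF | iptables-restore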
00:11:43.872 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:43.872 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:43.872 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:43.872 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:43.872 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:43.872 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:43.872 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:43.872 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:43.872 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:43.872 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:43.872 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:43.872 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:43.872 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:43.872 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:43.872 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:43.872 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:44.132 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:44.132 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:44.132 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.132 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.132 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.132 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:11:44.132 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.wGd /tmp/spdk.key-sha256.vBc /tmp/spdk.key-sha384.ftR /tmp/spdk.key-sha512.6a7 /tmp/spdk.key-sha512.oW0 /tmp/spdk.key-sha384.6lR /tmp/spdk.key-sha256.R4a '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:11:44.132 00:11:44.132 real 2m56.250s 00:11:44.132 user 7m3.142s 00:11:44.132 sys 0m26.244s 00:11:44.132 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.132 ************************************ 00:11:44.132 END TEST nvmf_auth_target 00:11:44.132 ************************************ 00:11:44.132 12:46:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.132 12:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:11:44.132 12:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:11:44.132 12:46:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:44.132 12:46:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.132 12:46:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:44.132 ************************************ 00:11:44.132 START TEST nvmf_bdevio_no_huge 00:11:44.132 ************************************ 00:11:44.132 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:11:44.132 * Looking for test storage... 00:11:44.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:44.132 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:44.132 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:11:44.132 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:44.393 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:44.393 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:44.393 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:44.393 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:44.393 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:11:44.393 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:11:44.393 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:11:44.393 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:11:44.393 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:11:44.393 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:11:44.393 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:11:44.393 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:44.393 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:11:44.393 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:11:44.393 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:44.393 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:44.393 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:11:44.393 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:11:44.393 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:44.393 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:11:44.393 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:11:44.393 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:11:44.393 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:11:44.393 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:44.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.394 --rc genhtml_branch_coverage=1 00:11:44.394 --rc genhtml_function_coverage=1 00:11:44.394 --rc genhtml_legend=1 00:11:44.394 --rc geninfo_all_blocks=1 00:11:44.394 --rc geninfo_unexecuted_blocks=1 00:11:44.394 00:11:44.394 ' 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:44.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.394 --rc genhtml_branch_coverage=1 00:11:44.394 --rc genhtml_function_coverage=1 00:11:44.394 --rc genhtml_legend=1 00:11:44.394 --rc geninfo_all_blocks=1 00:11:44.394 --rc geninfo_unexecuted_blocks=1 00:11:44.394 00:11:44.394 ' 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:44.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.394 --rc genhtml_branch_coverage=1 00:11:44.394 --rc genhtml_function_coverage=1 00:11:44.394 --rc genhtml_legend=1 00:11:44.394 --rc geninfo_all_blocks=1 00:11:44.394 --rc geninfo_unexecuted_blocks=1 00:11:44.394 00:11:44.394 ' 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:44.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.394 --rc genhtml_branch_coverage=1 00:11:44.394 --rc genhtml_function_coverage=1 00:11:44.394 --rc genhtml_legend=1 00:11:44.394 --rc geninfo_all_blocks=1 00:11:44.394 --rc geninfo_unexecuted_blocks=1 00:11:44.394 00:11:44.394 ' 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:44.394 
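The lt/cmp_versions trace above is only deciding which lcov option spelling to use: the harness takes the last field of 'lcov --version', splits it and the threshold on dots, and compares the pieces numerically. A stripped-down sketch of that comparison, assuming plain numeric dot-separated components (the real helper also splits on '-' and ':'):

  # return 0 (true) when version $1 sorts before version $2, e.g. lt 1.15 2
  lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1    # equal is not "less than"
  }

  lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov: use the --rc lcov_* option names"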
12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:44.394 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
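The 'line 33: [: : integer expression expected' message above is harmless but shows up every time nvmf/common.sh is sourced in this run: line 33 does a numeric test of the shape '[' "$flag" -eq 1 ']' while the flag it checks is empty, and [ refuses an empty string where it expects an integer. A hedged sketch of the usual guard; SOME_FLAG is a made-up stand-in for whatever variable line 33 actually reads:

  SOME_FLAG=""                                  # empty/unset, as in this run

  # failing shape: [ "" -eq 1 ] -> "integer expression expected"
  # [ "$SOME_FLAG" -eq 1 ] && echo enabled

  # guarded shape: default the flag to 0 so -eq always sees a number
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      echo enabled
  fi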
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:44.394 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:44.395 
12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:44.395 Cannot find device "nvmf_init_br" 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:44.395 Cannot find device "nvmf_init_br2" 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:44.395 Cannot find device "nvmf_tgt_br" 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:44.395 Cannot find device "nvmf_tgt_br2" 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:44.395 Cannot find device "nvmf_init_br" 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:44.395 Cannot find device "nvmf_init_br2" 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:44.395 Cannot find device "nvmf_tgt_br" 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:44.395 Cannot find device "nvmf_tgt_br2" 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:44.395 Cannot find device "nvmf_br" 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:11:44.395 12:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:44.395 Cannot find device "nvmf_init_if" 00:11:44.395 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:11:44.395 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:44.395 Cannot find device "nvmf_init_if2" 00:11:44.395 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:11:44.395 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:11:44.395 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:44.395 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:11:44.395 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:44.395 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:44.395 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:11:44.395 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:44.395 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:44.395 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:44.395 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:44.655 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:44.655 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:44.655 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:44.655 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:44.655 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:44.655 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:44.655 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:44.655 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:44.655 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:44.655 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:44.655 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:44.655 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:44.655 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:44.655 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:44.655 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:44.655 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:44.655 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:44.655 12:46:53 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:44.655 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:44.655 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:44.655 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:44.655 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:44.655 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:44.655 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:44.655 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:44.655 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:44.655 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:44.655 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:44.655 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:44.655 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:44.655 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:11:44.655 00:11:44.655 --- 10.0.0.3 ping statistics --- 00:11:44.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.655 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:11:44.655 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:44.655 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:44.655 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:11:44.655 00:11:44.655 --- 10.0.0.4 ping statistics --- 00:11:44.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.655 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:11:44.655 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:44.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:44.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:11:44.655 00:11:44.655 --- 10.0.0.1 ping statistics --- 00:11:44.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.656 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:11:44.656 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:44.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
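The nvmf_veth_init block above builds the whole virtual test network: a dedicated target namespace, veth pairs whose host-side peers are enslaved to a bridge, addresses 10.0.0.1/.2 on the initiator side and 10.0.0.3/.4 inside the namespace, plus the tagged iptables ACCEPT rules and the pings that prove connectivity. Condensed to one veth pair per side (interface names as in the log), the topology is roughly:

  ip netns add nvmf_tgt_ns_spdk

  # one pair per side; the *_br ends stay on the host and join the bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # the bridge is what lets the two sides reach each other
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  ping -c 1 10.0.0.3    # host/initiator -> namespaced target, as checked above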
00:11:44.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:11:44.656 00:11:44.656 --- 10.0.0.2 ping statistics --- 00:11:44.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.656 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:11:44.656 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:44.656 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:11:44.656 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:44.656 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:44.656 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:44.656 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:44.656 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:44.656 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:44.656 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:44.656 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:44.656 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:44.656 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:44.656 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:44.656 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=70434 00:11:44.656 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 70434 00:11:44.656 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 70434 ']' 00:11:44.656 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.656 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.656 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:11:44.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.656 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.656 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.656 12:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:44.915 [2024-11-15 12:46:53.355526] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
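With the network in place, nvmfappstart launches the target inside that namespace. This is the no-huge variant, so instead of hugepages the app is handed 1024 MB of ordinary memory, which is why the EAL line that follows reports --no-huge -m 1024 --iova-mode=va. Reduced to the moving parts (paths and arguments as in this run; the readiness poll is a simplification of waitforlisten):

  NVMF_APP=(ip netns exec nvmf_tgt_ns_spdk
            /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)

  # --no-huge -s 1024: 1024 MB of plain memory; -m 0x78: reactors on cores 3-6
  "${NVMF_APP[@]}" -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
  nvmfpid=$!

  # wait for the RPC socket to answer before issuing any rpc.py calls
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.5
  done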
00:11:44.915 [2024-11-15 12:46:53.355640] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:11:44.915 [2024-11-15 12:46:53.520910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:45.175 [2024-11-15 12:46:53.593854] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:45.175 [2024-11-15 12:46:53.593911] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:45.175 [2024-11-15 12:46:53.593925] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:45.175 [2024-11-15 12:46:53.593935] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:45.175 [2024-11-15 12:46:53.593944] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:45.175 [2024-11-15 12:46:53.595062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:45.175 [2024-11-15 12:46:53.595112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:45.175 [2024-11-15 12:46:53.595236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:45.175 [2024-11-15 12:46:53.595244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:45.175 [2024-11-15 12:46:53.601369] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:45.743 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:45.743 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:11:45.743 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:45.743 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:46.003 [2024-11-15 12:46:54.454123] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:46.003 Malloc0 00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.003 12:46:54 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:46.003 [2024-11-15 12:46:54.493038] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:46.003 { 00:11:46.003 "params": { 00:11:46.003 "name": "Nvme$subsystem", 00:11:46.003 "trtype": "$TEST_TRANSPORT", 00:11:46.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:46.003 "adrfam": "ipv4", 00:11:46.003 "trsvcid": "$NVMF_PORT", 00:11:46.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:46.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:46.003 "hdgst": ${hdgst:-false}, 00:11:46.003 "ddgst": ${ddgst:-false} 00:11:46.003 }, 00:11:46.003 "method": "bdev_nvme_attach_controller" 00:11:46.003 } 00:11:46.003 EOF 00:11:46.003 )") 00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
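The rpc_cmd sequence traced above is the entire bdevio target: a TCP transport, a 64 MiB malloc bdev, and one subsystem exposing it on 10.0.0.3:4420. With rpc_cmd expanded back into plain rpc.py calls against the socket started above, the provisioning is equivalent to:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

  $rpc nvmf_create_transport -t tcp -o -u 8192            # flags copied verbatim from the trace
  $rpc bdev_malloc_create 64 512 -b Malloc0               # 64 MiB bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420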
00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:11:46.003 12:46:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:46.003 "params": { 00:11:46.003 "name": "Nvme1", 00:11:46.003 "trtype": "tcp", 00:11:46.003 "traddr": "10.0.0.3", 00:11:46.003 "adrfam": "ipv4", 00:11:46.003 "trsvcid": "4420", 00:11:46.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:46.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:46.003 "hdgst": false, 00:11:46.003 "ddgst": false 00:11:46.003 }, 00:11:46.003 "method": "bdev_nvme_attach_controller" 00:11:46.003 }' 00:11:46.003 [2024-11-15 12:46:54.551002] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:11:46.003 [2024-11-15 12:46:54.551083] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid70480 ] 00:11:46.261 [2024-11-15 12:46:54.708913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:46.261 [2024-11-15 12:46:54.782976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:46.261 [2024-11-15 12:46:54.783039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:46.261 [2024-11-15 12:46:54.783045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.261 [2024-11-15 12:46:54.797570] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:46.520 I/O targets: 00:11:46.520 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:46.520 00:11:46.520 00:11:46.520 CUnit - A unit testing framework for C - Version 2.1-3 00:11:46.520 http://cunit.sourceforge.net/ 00:11:46.520 00:11:46.520 00:11:46.520 Suite: bdevio tests on: Nvme1n1 00:11:46.520 Test: blockdev write read block ...passed 00:11:46.520 Test: blockdev write zeroes read block ...passed 00:11:46.520 Test: blockdev write zeroes read no split ...passed 00:11:46.520 Test: blockdev write zeroes read split ...passed 00:11:46.520 Test: blockdev write zeroes read split partial ...passed 00:11:46.520 Test: blockdev reset ...[2024-11-15 12:46:55.024880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:46.520 [2024-11-15 12:46:55.024970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b8310 (9): Bad file descriptor 00:11:46.520 [2024-11-15 12:46:55.045412] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
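Worth pinning down what bdevio received on /dev/fd/62: gen_nvmf_target_json rendered the bdev_nvme_attach_controller parameters printed just above (the host side of the TCP connection to 10.0.0.3:4420, digests off). The sketch below replays that invocation; the outer subsystems/bdev wrapper is the standard SPDK JSON-config shape and is assumed here, since the trace only prints the inner config entry:

  # attach-controller values copied from the printf above
  config='{
    "subsystems": [ {
      "subsystem": "bdev",
      "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": {
          "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.3",
          "adrfam": "ipv4", "trsvcid": "4420",
          "subnqn": "nqn.2016-06.io.spdk:cnode1",
          "hostnqn": "nqn.2016-06.io.spdk:host1",
          "hdgst": false, "ddgst": false
        }
      } ]
    } ]
  }'

  # same shape as the traced run: the config arrives on an inherited /dev/fd path
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --no-huge -s 1024 \
      --json <(printf '%s\n' "$config")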
00:11:46.520 passed 00:11:46.520 Test: blockdev write read 8 blocks ...passed 00:11:46.520 Test: blockdev write read size > 128k ...passed 00:11:46.520 Test: blockdev write read invalid size ...passed 00:11:46.520 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:46.520 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:46.520 Test: blockdev write read max offset ...passed 00:11:46.520 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:46.520 Test: blockdev writev readv 8 blocks ...passed 00:11:46.520 Test: blockdev writev readv 30 x 1block ...passed 00:11:46.520 Test: blockdev writev readv block ...passed 00:11:46.520 Test: blockdev writev readv size > 128k ...passed 00:11:46.520 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:46.520 Test: blockdev comparev and writev ...[2024-11-15 12:46:55.053660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:46.520 [2024-11-15 12:46:55.053723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:46.520 [2024-11-15 12:46:55.053749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:46.520 [2024-11-15 12:46:55.053763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:46.520 [2024-11-15 12:46:55.054273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:46.520 [2024-11-15 12:46:55.054313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:46.520 [2024-11-15 12:46:55.054341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:46.520 [2024-11-15 12:46:55.054361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:46.520 [2024-11-15 12:46:55.054808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:46.520 [2024-11-15 12:46:55.054842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:46.520 [2024-11-15 12:46:55.054865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:46.520 [2024-11-15 12:46:55.054879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:46.520 [2024-11-15 12:46:55.055375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:46.520 [2024-11-15 12:46:55.055410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:46.520 [2024-11-15 12:46:55.055441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:46.520 [2024-11-15 12:46:55.055455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:46.520 passed 00:11:46.520 Test: blockdev nvme passthru rw ...passed 00:11:46.520 Test: blockdev nvme passthru vendor specific ...[2024-11-15 12:46:55.056392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:46.520 [2024-11-15 12:46:55.056422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:46.520 [2024-11-15 12:46:55.056628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:46.520 [2024-11-15 12:46:55.056662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:46.520 [2024-11-15 12:46:55.056853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:46.520 [2024-11-15 12:46:55.056887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:46.520 [2024-11-15 12:46:55.057063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:46.520 [2024-11-15 12:46:55.057096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:46.520 passed 00:11:46.520 Test: blockdev nvme admin passthru ...passed 00:11:46.520 Test: blockdev copy ...passed 00:11:46.520 00:11:46.520 Run Summary: Type Total Ran Passed Failed Inactive 00:11:46.520 suites 1 1 n/a 0 0 00:11:46.520 tests 23 23 23 0 0 00:11:46.520 asserts 152 152 152 0 n/a 00:11:46.520 00:11:46.520 Elapsed time = 0.178 seconds 00:11:46.779 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.779 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.779 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:46.779 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.779 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:46.779 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:11:46.779 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:46.779 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:11:46.779 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:46.779 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:11:46.779 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:46.779 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:46.779 rmmod nvme_tcp 00:11:46.779 rmmod nvme_fabrics 00:11:47.038 rmmod nvme_keyring 00:11:47.038 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:47.038 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:11:47.038 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:11:47.038 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 70434 ']' 00:11:47.038 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 70434 00:11:47.038 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 70434 ']' 00:11:47.038 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 70434 00:11:47.038 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:11:47.038 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:47.038 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70434 00:11:47.038 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:47.038 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:47.038 killing process with pid 70434 00:11:47.038 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70434' 00:11:47.038 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 70434 00:11:47.038 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 70434 00:11:47.298 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:47.298 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:47.298 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:47.298 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:11:47.298 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:11:47.298 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:47.298 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:11:47.298 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:47.298 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:47.298 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:47.298 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:47.298 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:47.298 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:47.298 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:47.298 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:47.298 12:46:55 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:47.298 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:47.298 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:47.298 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:47.557 12:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:47.557 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:47.557 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:47.557 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:47.557 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.557 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.557 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.557 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:11:47.557 00:11:47.557 real 0m3.419s 00:11:47.557 user 0m10.367s 00:11:47.557 sys 0m1.277s 00:11:47.557 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.557 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:47.557 ************************************ 00:11:47.557 END TEST nvmf_bdevio_no_huge 00:11:47.557 ************************************ 00:11:47.557 12:46:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:11:47.557 12:46:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:47.557 12:46:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.557 12:46:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:47.557 ************************************ 00:11:47.557 START TEST nvmf_tls 00:11:47.557 ************************************ 00:11:47.557 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:11:47.557 * Looking for test storage... 
00:11:47.557 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:47.557 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:47.557 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:47.557 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:11:47.817 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:47.817 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:47.817 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:47.817 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:47.817 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:11:47.817 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:11:47.817 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:47.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.818 --rc genhtml_branch_coverage=1 00:11:47.818 --rc genhtml_function_coverage=1 00:11:47.818 --rc genhtml_legend=1 00:11:47.818 --rc geninfo_all_blocks=1 00:11:47.818 --rc geninfo_unexecuted_blocks=1 00:11:47.818 00:11:47.818 ' 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:47.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.818 --rc genhtml_branch_coverage=1 00:11:47.818 --rc genhtml_function_coverage=1 00:11:47.818 --rc genhtml_legend=1 00:11:47.818 --rc geninfo_all_blocks=1 00:11:47.818 --rc geninfo_unexecuted_blocks=1 00:11:47.818 00:11:47.818 ' 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:47.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.818 --rc genhtml_branch_coverage=1 00:11:47.818 --rc genhtml_function_coverage=1 00:11:47.818 --rc genhtml_legend=1 00:11:47.818 --rc geninfo_all_blocks=1 00:11:47.818 --rc geninfo_unexecuted_blocks=1 00:11:47.818 00:11:47.818 ' 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:47.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.818 --rc genhtml_branch_coverage=1 00:11:47.818 --rc genhtml_function_coverage=1 00:11:47.818 --rc genhtml_legend=1 00:11:47.818 --rc geninfo_all_blocks=1 00:11:47.818 --rc geninfo_unexecuted_blocks=1 00:11:47.818 00:11:47.818 ' 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:47.818 12:46:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:47.818 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:47.818 
12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:47.818 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:47.819 Cannot find device "nvmf_init_br" 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:47.819 Cannot find device "nvmf_init_br2" 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:47.819 Cannot find device "nvmf_tgt_br" 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:47.819 Cannot find device "nvmf_tgt_br2" 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:47.819 Cannot find device "nvmf_init_br" 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:47.819 Cannot find device "nvmf_init_br2" 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:47.819 Cannot find device "nvmf_tgt_br" 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:47.819 Cannot find device "nvmf_tgt_br2" 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:47.819 Cannot find device "nvmf_br" 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:47.819 Cannot find device "nvmf_init_if" 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:47.819 Cannot find device "nvmf_init_if2" 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:47.819 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:47.819 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:11:47.819 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:48.078 12:46:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:48.078 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:48.078 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:11:48.078 00:11:48.078 --- 10.0.0.3 ping statistics --- 00:11:48.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.078 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:48.078 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:48.078 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:11:48.078 00:11:48.078 --- 10.0.0.4 ping statistics --- 00:11:48.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.078 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:48.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:48.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:11:48.078 00:11:48.078 --- 10.0.0.1 ping statistics --- 00:11:48.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.078 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:48.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:48.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:11:48.078 00:11:48.078 --- 10.0.0.2 ping statistics --- 00:11:48.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.078 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:48.078 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:11:48.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.338 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=70721 00:11:48.338 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:11:48.338 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 70721 00:11:48.338 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70721 ']' 00:11:48.338 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.338 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.338 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.338 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.338 12:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:11:48.338 [2024-11-15 12:46:56.800523] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:11:48.338 [2024-11-15 12:46:56.800791] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:48.338 [2024-11-15 12:46:56.946961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.338 [2024-11-15 12:46:56.984906] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:48.338 [2024-11-15 12:46:56.985196] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:48.338 [2024-11-15 12:46:56.985429] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:48.338 [2024-11-15 12:46:56.985598] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:48.338 [2024-11-15 12:46:56.985850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:48.338 [2024-11-15 12:46:56.986264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.597 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:48.597 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:11:48.597 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:48.597 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:48.597 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:11:48.597 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:48.597 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:11:48.597 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:11:48.856 true 00:11:48.856 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:48.856 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:11:49.115 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:11:49.115 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:11:49.115 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:11:49.374 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:11:49.374 12:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:49.632 12:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:11:49.632 12:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:11:49.632 12:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:11:49.891 12:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:11:49.891 12:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:11:50.150 12:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:11:50.150 12:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:11:50.150 12:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:50.150 12:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:11:50.150 12:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:11:50.150 12:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:11:50.150 12:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:11:50.409 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:11:50.409 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:50.667 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:11:50.667 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:11:50.667 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:11:50.926 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:50.926 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:11:51.185 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:11:51.185 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:11:51.185 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:11:51.185 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:11:51.185 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:11:51.185 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:11:51.185 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:11:51.185 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:11:51.185 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:11:51.185 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:51.185 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:11:51.185 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:11:51.185 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:11:51.185 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:11:51.185 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:11:51.185 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:11:51.185 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:11:51.185 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:51.185 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:11:51.185 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.ZLyf1hE94y 00:11:51.185 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:11:51.185 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.GrdLsCmJgo 00:11:51.185 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:51.185 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:51.185 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.ZLyf1hE94y 00:11:51.185 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.GrdLsCmJgo 00:11:51.185 12:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:11:51.444 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:11:51.702 [2024-11-15 12:47:00.299769] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:51.702 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.ZLyf1hE94y 00:11:51.702 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ZLyf1hE94y 00:11:51.702 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:51.961 [2024-11-15 12:47:00.536178] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.961 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:52.220 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:11:52.478 [2024-11-15 12:47:01.004296] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:52.478 [2024-11-15 12:47:01.004503] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:52.478 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:52.737 malloc0 00:11:52.737 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:52.995 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ZLyf1hE94y 00:11:53.254 12:47:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:11:53.254 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ZLyf1hE94y 00:12:05.458 Initializing NVMe Controllers 00:12:05.458 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:12:05.458 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:05.458 Initialization complete. Launching workers. 00:12:05.458 ======================================================== 00:12:05.458 Latency(us) 00:12:05.458 Device Information : IOPS MiB/s Average min max 00:12:05.458 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11629.08 45.43 5504.49 856.47 8154.44 00:12:05.458 ======================================================== 00:12:05.458 Total : 11629.08 45.43 5504.49 856.47 8154.44 00:12:05.458 00:12:05.458 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZLyf1hE94y 00:12:05.458 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:05.458 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:05.458 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:05.458 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZLyf1hE94y 00:12:05.458 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:05.458 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=70943 00:12:05.458 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:05.458 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:05.458 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 70943 /var/tmp/bdevperf.sock 00:12:05.458 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70943 ']' 00:12:05.458 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:05.458 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:05.458 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:05.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
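Stripped of the xtrace noise, the target-side TLS setup traced above boils down to a short rpc.py sequence: pin the ssl socket implementation to TLS 1.3, finish framework init, create the TCP transport, and publish a malloc namespace behind a listener with TLS enabled (-k), with the PSK registered in a keyring and bound to the allowed host NQN. A condensed sketch using the same NQNs, address and key file as the trace (rpc talks to the target's default RPC socket):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key=/tmp/tmp.ZLyf1hE94y          # file holding the NVMeTLSkey-1:01:... string, chmod 0600

    $rpc sock_set_default_impl -i ssl                     # TLS-capable socket implementation
    $rpc sock_impl_set_options -i ssl --tls-version 13    # require TLS 1.3
    $rpc framework_start_init                             # leave --wait-for-rpc mode
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k enables TLS
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 "$key"                 # register the PSK under the name key0
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The spdk_nvme_perf run above connected from inside the namespace with -S ssl and --psk-path pointing at the same key file, which is what produced the 11.6k IOPS summary.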
00:12:05.458 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:05.458 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:05.458 [2024-11-15 12:47:12.144182] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:12:05.458 [2024-11-15 12:47:12.144476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70943 ] 00:12:05.458 [2024-11-15 12:47:12.297382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.458 [2024-11-15 12:47:12.336734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.458 [2024-11-15 12:47:12.370664] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:05.458 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:05.458 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:05.458 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZLyf1hE94y 00:12:05.458 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:05.458 [2024-11-15 12:47:12.893662] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:05.458 TLSTESTn1 00:12:05.458 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:05.458 Running I/O for 10 seconds... 
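The verify workload whose I/O just started is driven by bdevperf over its own RPC socket rather than by spdk_nvme_perf: the application is started idle (-z), waits for RPC, gets the PSK loaded into its keyring, attaches a TLS-protected controller, and only then runs the actual test through bdevperf.py. A condensed sketch of run_bdevperf as it appears in the trace (paths as in the log; the wait-for-socket loop is omitted):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    # start bdevperf idle (-z), configured for a 128-deep 4 KiB verify workload
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &

    $rpc -s "$sock" keyring_file_add_key key0 /tmp/tmp.ZLyf1hE94y   # same PSK file as the target side
    $rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0   # creates bdev TLSTESTn1

    # kick off the configured workload over the attached TLS connection
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests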
00:12:06.836 4864.00 IOPS, 19.00 MiB/s [2024-11-15T12:47:16.075Z] 4873.00 IOPS, 19.04 MiB/s [2024-11-15T12:47:17.453Z] 4885.00 IOPS, 19.08 MiB/s [2024-11-15T12:47:18.391Z] 4886.00 IOPS, 19.09 MiB/s [2024-11-15T12:47:19.329Z] 4893.80 IOPS, 19.12 MiB/s [2024-11-15T12:47:20.266Z] 4902.50 IOPS, 19.15 MiB/s [2024-11-15T12:47:21.203Z] 4901.43 IOPS, 19.15 MiB/s [2024-11-15T12:47:22.139Z] 4898.12 IOPS, 19.13 MiB/s [2024-11-15T12:47:23.076Z] 4899.11 IOPS, 19.14 MiB/s [2024-11-15T12:47:23.335Z] 4900.50 IOPS, 19.14 MiB/s 00:12:14.665 Latency(us) 00:12:14.665 [2024-11-15T12:47:23.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:14.665 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:14.665 Verification LBA range: start 0x0 length 0x2000 00:12:14.665 TLSTESTn1 : 10.02 4905.13 19.16 0.00 0.00 26050.03 4944.99 20852.36 00:12:14.665 [2024-11-15T12:47:23.335Z] =================================================================================================================== 00:12:14.665 [2024-11-15T12:47:23.335Z] Total : 4905.13 19.16 0.00 0.00 26050.03 4944.99 20852.36 00:12:14.665 { 00:12:14.665 "results": [ 00:12:14.665 { 00:12:14.665 "job": "TLSTESTn1", 00:12:14.665 "core_mask": "0x4", 00:12:14.665 "workload": "verify", 00:12:14.665 "status": "finished", 00:12:14.665 "verify_range": { 00:12:14.665 "start": 0, 00:12:14.665 "length": 8192 00:12:14.665 }, 00:12:14.665 "queue_depth": 128, 00:12:14.665 "io_size": 4096, 00:12:14.665 "runtime": 10.016239, 00:12:14.665 "iops": 4905.134552001005, 00:12:14.665 "mibps": 19.160681843753927, 00:12:14.665 "io_failed": 0, 00:12:14.665 "io_timeout": 0, 00:12:14.665 "avg_latency_us": 26050.03310822088, 00:12:14.665 "min_latency_us": 4944.989090909091, 00:12:14.665 "max_latency_us": 20852.363636363636 00:12:14.665 } 00:12:14.665 ], 00:12:14.665 "core_count": 1 00:12:14.665 } 00:12:14.665 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:14.665 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 70943 00:12:14.665 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70943 ']' 00:12:14.665 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70943 00:12:14.665 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:14.665 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:14.665 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70943 00:12:14.665 killing process with pid 70943 00:12:14.665 Received shutdown signal, test time was about 10.000000 seconds 00:12:14.665 00:12:14.665 Latency(us) 00:12:14.665 [2024-11-15T12:47:23.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:14.665 [2024-11-15T12:47:23.335Z] =================================================================================================================== 00:12:14.665 [2024-11-15T12:47:23.335Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:14.665 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:14.665 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:14.665 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 70943' 00:12:14.665 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70943 00:12:14.665 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70943 00:12:14.665 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GrdLsCmJgo 00:12:14.665 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:12:14.666 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GrdLsCmJgo 00:12:14.666 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:12:14.666 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:14.666 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:12:14.666 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:14.666 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GrdLsCmJgo 00:12:14.666 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:14.666 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:14.666 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:14.666 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.GrdLsCmJgo 00:12:14.666 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:14.666 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:14.666 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71070 00:12:14.666 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:14.666 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71070 /var/tmp/bdevperf.sock 00:12:14.666 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71070 ']' 00:12:14.666 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:14.666 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:14.666 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:14.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:14.666 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:14.666 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:14.666 [2024-11-15 12:47:23.313511] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
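From here on the script exercises failure paths. target/tls.sh@147 wraps run_bdevperf in the NOT helper and hands it /tmp/tmp.GrdLsCmJgo, the second key, which was never registered with the target, so the controller attach is expected to fail. The es= and valid_exec_arg bookkeeping in the trace comes from that wrapper; conceptually it behaves like the sketch below (an illustration of the pattern, not the exact helper from autotest_common.sh, which also special-cases signal exits):

    # Succeed only when the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?     # run the command, capture its exit status
        (( es != 0 ))     # a non-zero status from the command counts as a pass
    }

    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GrdLsCmJgo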
00:12:14.666 [2024-11-15 12:47:23.313784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71070 ] 00:12:14.936 [2024-11-15 12:47:23.451645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.936 [2024-11-15 12:47:23.480770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.936 [2024-11-15 12:47:23.509392] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:14.936 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:14.936 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:14.936 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GrdLsCmJgo 00:12:15.208 12:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:15.468 [2024-11-15 12:47:24.059441] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:15.468 [2024-11-15 12:47:24.068284] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:15.468 [2024-11-15 12:47:24.068819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138bfb0 (107): Transport endpoint is not connected 00:12:15.468 [2024-11-15 12:47:24.069811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138bfb0 (9): Bad file descriptor 00:12:15.468 [2024-11-15 12:47:24.070810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:12:15.468 [2024-11-15 12:47:24.070833] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:12:15.468 [2024-11-15 12:47:24.070844] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:12:15.468 [2024-11-15 12:47:24.070860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:12:15.468 request: 00:12:15.468 { 00:12:15.468 "name": "TLSTEST", 00:12:15.468 "trtype": "tcp", 00:12:15.468 "traddr": "10.0.0.3", 00:12:15.468 "adrfam": "ipv4", 00:12:15.468 "trsvcid": "4420", 00:12:15.468 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:15.468 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:15.468 "prchk_reftag": false, 00:12:15.468 "prchk_guard": false, 00:12:15.468 "hdgst": false, 00:12:15.468 "ddgst": false, 00:12:15.468 "psk": "key0", 00:12:15.468 "allow_unrecognized_csi": false, 00:12:15.468 "method": "bdev_nvme_attach_controller", 00:12:15.468 "req_id": 1 00:12:15.468 } 00:12:15.468 Got JSON-RPC error response 00:12:15.468 response: 00:12:15.468 { 00:12:15.468 "code": -5, 00:12:15.468 "message": "Input/output error" 00:12:15.468 } 00:12:15.468 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71070 00:12:15.468 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71070 ']' 00:12:15.468 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71070 00:12:15.468 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:15.468 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.468 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71070 00:12:15.468 killing process with pid 71070 00:12:15.468 Received shutdown signal, test time was about 10.000000 seconds 00:12:15.468 00:12:15.468 Latency(us) 00:12:15.468 [2024-11-15T12:47:24.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:15.468 [2024-11-15T12:47:24.138Z] =================================================================================================================== 00:12:15.468 [2024-11-15T12:47:24.138Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:15.468 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:15.468 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:15.468 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71070' 00:12:15.468 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71070 00:12:15.468 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71070 00:12:15.727 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:12:15.727 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:12:15.727 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:15.727 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:15.727 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:15.727 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZLyf1hE94y 00:12:15.727 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:12:15.727 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZLyf1hE94y 
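This second negative case (target/tls.sh@150) and the previous one both hinge on the interchange-format PSK strings generated earlier: /tmp/tmp.ZLyf1hE94y holds the key registered for host1, while /tmp/tmp.GrdLsCmJgo holds a key the target never saw. The string layout can be reproduced with a short sketch in the same style as the format_key helper traced above; treat the details as assumptions rather than a spec quote, namely that the base64 field is the raw key bytes followed by a little-endian CRC-32 of those bytes and that 01 is the SHA-256 hash indicator:

    # Illustrative re-implementation of format_interchange_psk; the exact layout is an assumption.
    format_interchange_psk_sketch() {
        local key=$1 digest=$2
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); d=int(sys.argv[2]); print("NVMeTLSkey-1:%02x:%s:" % (d, base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$key" "$digest"
    }

    # If the assumptions hold, this reproduces the NVMeTLSkey-1:01:... string seen in the trace:
    format_interchange_psk_sketch 00112233445566778899aabbccddeeff 1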
00:12:15.727 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:12:15.727 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:15.727 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:12:15.727 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:15.727 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZLyf1hE94y 00:12:15.727 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:15.727 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:15.727 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:12:15.727 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZLyf1hE94y 00:12:15.727 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:15.727 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:15.727 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71091 00:12:15.727 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:15.727 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71091 /var/tmp/bdevperf.sock 00:12:15.727 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71091 ']' 00:12:15.727 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:15.727 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:15.727 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:15.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:15.728 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:15.728 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:15.728 [2024-11-15 12:47:24.284347] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:12:15.728 [2024-11-15 12:47:24.284583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71091 ] 00:12:15.987 [2024-11-15 12:47:24.422690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.987 [2024-11-15 12:47:24.452283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.987 [2024-11-15 12:47:24.480365] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:15.987 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:15.987 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:15.987 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZLyf1hE94y 00:12:16.246 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:12:16.505 [2024-11-15 12:47:24.954309] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:16.505 [2024-11-15 12:47:24.958989] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:16.505 [2024-11-15 12:47:24.959180] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:16.505 [2024-11-15 12:47:24.959248] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:16.505 [2024-11-15 12:47:24.959790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e7fb0 (107): Transport endpoint is not connected 00:12:16.505 [2024-11-15 12:47:24.960779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e7fb0 (9): Bad file descriptor 00:12:16.505 [2024-11-15 12:47:24.961776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:12:16.505 [2024-11-15 12:47:24.961801] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:12:16.505 [2024-11-15 12:47:24.961813] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:12:16.505 [2024-11-15 12:47:24.961828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:12:16.505 request: 00:12:16.505 { 00:12:16.505 "name": "TLSTEST", 00:12:16.505 "trtype": "tcp", 00:12:16.505 "traddr": "10.0.0.3", 00:12:16.505 "adrfam": "ipv4", 00:12:16.505 "trsvcid": "4420", 00:12:16.505 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:16.505 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:12:16.505 "prchk_reftag": false, 00:12:16.505 "prchk_guard": false, 00:12:16.505 "hdgst": false, 00:12:16.505 "ddgst": false, 00:12:16.505 "psk": "key0", 00:12:16.505 "allow_unrecognized_csi": false, 00:12:16.505 "method": "bdev_nvme_attach_controller", 00:12:16.505 "req_id": 1 00:12:16.505 } 00:12:16.505 Got JSON-RPC error response 00:12:16.505 response: 00:12:16.505 { 00:12:16.505 "code": -5, 00:12:16.505 "message": "Input/output error" 00:12:16.505 } 00:12:16.505 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71091 00:12:16.505 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71091 ']' 00:12:16.505 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71091 00:12:16.505 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:16.505 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:16.505 12:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71091 00:12:16.505 killing process with pid 71091 00:12:16.505 Received shutdown signal, test time was about 10.000000 seconds 00:12:16.505 00:12:16.506 Latency(us) 00:12:16.506 [2024-11-15T12:47:25.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:16.506 [2024-11-15T12:47:25.176Z] =================================================================================================================== 00:12:16.506 [2024-11-15T12:47:25.176Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71091' 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71091 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71091 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZLyf1hE94y 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZLyf1hE94y 
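The failure above is the expected outcome of this negative test: the target reports "Could not find PSK for identity ... host2 ... cnode1", i.e. no PSK is registered for that host/subsystem pairing, so the TLS session is torn down (errno 107) and the attach returns JSON-RPC error -5. The harness now repeats the same check with the other mismatched pairing (cnode2/host1). A minimal sketch of the RPC sequence the run_bdevperf wrapper issues against the bdevperf application, using the socket path, address, NQNs and temp key path from the log (the RPC shell variable is just shorthand for the rpc.py invocation shown above):

    # Register the client-side PSK with bdevperf, then attach with a pairing
    # the target has no PSK for (all values taken from the log above).
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    $RPC keyring_file_add_key key0 /tmp/tmp.ZLyf1hE94y
    $RPC bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0
    # Expected: JSON-RPC error -5 (Input/output error) because the target cannot
    # match the host2/cnode1 identity to a PSK and drops the TLS connection.

The NOT wrapper around run_bdevperf simply asserts that this sequence fails, which is why the subsequent "return 1" is treated as a pass.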
00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZLyf1hE94y 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZLyf1hE94y 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71111 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71111 /var/tmp/bdevperf.sock 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71111 ']' 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:16.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:16.506 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:16.766 [2024-11-15 12:47:25.182474] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:12:16.766 [2024-11-15 12:47:25.182743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71111 ] 00:12:16.766 [2024-11-15 12:47:25.328961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.766 [2024-11-15 12:47:25.357948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.766 [2024-11-15 12:47:25.386866] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:17.025 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:17.025 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:17.025 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZLyf1hE94y 00:12:17.025 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:17.285 [2024-11-15 12:47:25.858898] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:17.285 [2024-11-15 12:47:25.868436] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:17.285 [2024-11-15 12:47:25.868683] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:17.285 [2024-11-15 12:47:25.868787] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:17.285 [2024-11-15 12:47:25.869292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106bfb0 (107): Transport endpoint is not connected 00:12:17.285 [2024-11-15 12:47:25.870284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106bfb0 (9): Bad file descriptor 00:12:17.285 [2024-11-15 12:47:25.871281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:12:17.285 [2024-11-15 12:47:25.871298] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:12:17.285 [2024-11-15 12:47:25.871307] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:12:17.285 [2024-11-15 12:47:25.871322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:12:17.285 request: 00:12:17.285 { 00:12:17.285 "name": "TLSTEST", 00:12:17.285 "trtype": "tcp", 00:12:17.285 "traddr": "10.0.0.3", 00:12:17.285 "adrfam": "ipv4", 00:12:17.285 "trsvcid": "4420", 00:12:17.285 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:12:17.285 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:17.285 "prchk_reftag": false, 00:12:17.285 "prchk_guard": false, 00:12:17.285 "hdgst": false, 00:12:17.285 "ddgst": false, 00:12:17.285 "psk": "key0", 00:12:17.285 "allow_unrecognized_csi": false, 00:12:17.285 "method": "bdev_nvme_attach_controller", 00:12:17.285 "req_id": 1 00:12:17.285 } 00:12:17.285 Got JSON-RPC error response 00:12:17.285 response: 00:12:17.285 { 00:12:17.285 "code": -5, 00:12:17.285 "message": "Input/output error" 00:12:17.285 } 00:12:17.285 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71111 00:12:17.285 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71111 ']' 00:12:17.285 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71111 00:12:17.285 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:17.286 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:17.286 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71111 00:12:17.286 killing process with pid 71111 00:12:17.286 Received shutdown signal, test time was about 10.000000 seconds 00:12:17.286 00:12:17.286 Latency(us) 00:12:17.286 [2024-11-15T12:47:25.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:17.286 [2024-11-15T12:47:25.956Z] =================================================================================================================== 00:12:17.286 [2024-11-15T12:47:25.956Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:17.286 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:17.286 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:17.286 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71111' 00:12:17.286 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71111 00:12:17.286 12:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71111 00:12:17.545 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:12:17.545 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:12:17.545 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:17.545 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:17.545 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:17.545 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:17.545 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:12:17.545 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:17.545 12:47:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:12:17.545 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:17.545 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:12:17.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:17.545 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:17.545 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:17.545 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:17.545 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:17.546 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:17.546 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:12:17.546 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:17.546 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71127 00:12:17.546 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:17.546 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71127 /var/tmp/bdevperf.sock 00:12:17.546 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:17.546 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71127 ']' 00:12:17.546 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:17.546 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:17.546 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:17.546 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:17.546 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:17.546 [2024-11-15 12:47:26.100738] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
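This third bdevperf instance (pid 71127) exercises the empty-key case: run_bdevperf is invoked with an empty PSK path, and, as the next lines show, keyring_file_add_key rejects it because the file-based keyring only accepts absolute paths, so the attach then fails because key0 was never added. Condensed to the two RPCs involved (socket path and NQNs from the log):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    # An empty (non-absolute) path is rejected by the file-based keyring...
    $RPC keyring_file_add_key key0 ''          # -> -1 "Operation not permitted"
    # ...so the controller cannot load the PSK named key0 at attach time.
    $RPC bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0   # -> -126 "Required key not available"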
00:12:17.546 [2024-11-15 12:47:26.100983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71127 ] 00:12:17.805 [2024-11-15 12:47:26.240720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.805 [2024-11-15 12:47:26.269588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:17.805 [2024-11-15 12:47:26.298099] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:17.805 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:17.805 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:17.805 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:12:18.063 [2024-11-15 12:47:26.555921] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:12:18.063 [2024-11-15 12:47:26.556125] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:12:18.063 request: 00:12:18.063 { 00:12:18.063 "name": "key0", 00:12:18.063 "path": "", 00:12:18.063 "method": "keyring_file_add_key", 00:12:18.063 "req_id": 1 00:12:18.063 } 00:12:18.063 Got JSON-RPC error response 00:12:18.063 response: 00:12:18.063 { 00:12:18.063 "code": -1, 00:12:18.063 "message": "Operation not permitted" 00:12:18.063 } 00:12:18.063 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:18.322 [2024-11-15 12:47:26.840081] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:18.322 [2024-11-15 12:47:26.840319] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:12:18.322 request: 00:12:18.322 { 00:12:18.322 "name": "TLSTEST", 00:12:18.322 "trtype": "tcp", 00:12:18.322 "traddr": "10.0.0.3", 00:12:18.322 "adrfam": "ipv4", 00:12:18.322 "trsvcid": "4420", 00:12:18.322 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:18.322 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:18.322 "prchk_reftag": false, 00:12:18.322 "prchk_guard": false, 00:12:18.322 "hdgst": false, 00:12:18.322 "ddgst": false, 00:12:18.322 "psk": "key0", 00:12:18.322 "allow_unrecognized_csi": false, 00:12:18.322 "method": "bdev_nvme_attach_controller", 00:12:18.322 "req_id": 1 00:12:18.322 } 00:12:18.322 Got JSON-RPC error response 00:12:18.322 response: 00:12:18.322 { 00:12:18.322 "code": -126, 00:12:18.322 "message": "Required key not available" 00:12:18.322 } 00:12:18.322 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71127 00:12:18.322 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71127 ']' 00:12:18.322 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71127 00:12:18.322 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:18.322 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.322 12:47:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71127 00:12:18.322 killing process with pid 71127 00:12:18.322 Received shutdown signal, test time was about 10.000000 seconds 00:12:18.322 00:12:18.322 Latency(us) 00:12:18.322 [2024-11-15T12:47:26.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:18.322 [2024-11-15T12:47:26.992Z] =================================================================================================================== 00:12:18.322 [2024-11-15T12:47:26.992Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:18.322 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:18.322 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:18.322 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71127' 00:12:18.322 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71127 00:12:18.322 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71127 00:12:18.581 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:12:18.581 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:12:18.581 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:18.581 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:18.581 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:18.581 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 70721 00:12:18.581 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70721 ']' 00:12:18.581 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70721 00:12:18.581 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:18.581 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.581 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70721 00:12:18.581 killing process with pid 70721 00:12:18.581 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:18.581 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:18.581 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70721' 00:12:18.581 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70721 00:12:18.581 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70721 00:12:18.581 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:12:18.581 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:12:18.581 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:12:18.581 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:12:18.581 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:12:18.581 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:12:18.581 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:12:18.582 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:18.582 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:12:18.582 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.fCevzAXk6f 00:12:18.582 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:18.582 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.fCevzAXk6f 00:12:18.582 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:12:18.582 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:18.582 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:18.582 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:18.582 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71166 00:12:18.582 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:18.582 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71166 00:12:18.582 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71166 ']' 00:12:18.582 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.582 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.582 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.582 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.582 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:18.840 [2024-11-15 12:47:27.280183] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:12:18.840 [2024-11-15 12:47:27.280704] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.840 [2024-11-15 12:47:27.426357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.840 [2024-11-15 12:47:27.452830] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.840 [2024-11-15 12:47:27.452881] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:18.840 [2024-11-15 12:47:27.452907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.840 [2024-11-15 12:47:27.452914] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.840 [2024-11-15 12:47:27.452920] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:18.840 [2024-11-15 12:47:27.453174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.840 [2024-11-15 12:47:27.479200] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:19.099 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:19.099 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:19.099 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:19.099 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:19.099 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:19.099 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.099 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.fCevzAXk6f 00:12:19.099 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.fCevzAXk6f 00:12:19.099 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:19.358 [2024-11-15 12:47:27.835954] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:19.358 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:19.617 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:12:19.617 [2024-11-15 12:47:28.276023] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:19.617 [2024-11-15 12:47:28.276203] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:19.875 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:20.134 malloc0 00:12:20.134 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:20.134 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.fCevzAXk6f 00:12:20.393 12:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:12:20.652 12:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fCevzAXk6f 00:12:20.652 12:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
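Before this bdevperf run, the log above generates a TLS PSK in the NVMe interchange format (prefix NVMeTLSkey-1, digest identifier 02, base64 payload), writes it to /tmp/tmp.fCevzAXk6f with 0600 permissions, starts a fresh nvmf target (pid 71166) and provisions it end to end. Condensed from the RPCs in the log, the target-side setup looks roughly like this (addresses, NQNs and the key path are the ones the harness used):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # The PSK file must be private to the owner; keyring_file refuses anything looser.
    chmod 0600 /tmp/tmp.fCevzAXk6f
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS listener
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC keyring_file_add_key key0 /tmp/tmp.fCevzAXk6f
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

With the key registered on the target and host1 authorized with it, the run that follows is the positive case.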
00:12:20.652 12:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:20.652 12:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:20.652 12:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.fCevzAXk6f 00:12:20.652 12:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:20.652 12:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:20.652 12:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71213 00:12:20.652 12:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:20.652 12:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71213 /var/tmp/bdevperf.sock 00:12:20.652 12:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71213 ']' 00:12:20.652 12:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:20.652 12:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:20.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:20.652 12:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:20.652 12:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:20.652 12:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:20.911 [2024-11-15 12:47:29.327199] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
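This bdevperf instance (pid 71213) is the matching-key case: the key path, host NQN and subsystem all line up with what was just provisioned on the target, so the attach succeeds, a TLSTESTn1 bdev is created and bdevperf runs verify I/O over the TLS connection for ten seconds (about 4.9k IOPS in the summary that follows). The client side reduces to (all values from the log):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    $RPC keyring_file_add_key key0 /tmp/tmp.fCevzAXk6f
    $RPC bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    # bdevperf was started with -z, so I/O only starts once this RPC kicks it off:
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests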
00:12:20.911 [2024-11-15 12:47:29.327279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71213 ] 00:12:20.911 [2024-11-15 12:47:29.469666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.911 [2024-11-15 12:47:29.498541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:20.911 [2024-11-15 12:47:29.527838] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:21.854 12:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.854 12:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:21.854 12:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fCevzAXk6f 00:12:21.854 12:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:22.113 [2024-11-15 12:47:30.737513] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:22.372 TLSTESTn1 00:12:22.373 12:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:22.373 Running I/O for 10 seconds... 00:12:24.687 4822.00 IOPS, 18.84 MiB/s [2024-11-15T12:47:33.925Z] 4860.50 IOPS, 18.99 MiB/s [2024-11-15T12:47:35.303Z] 4870.33 IOPS, 19.02 MiB/s [2024-11-15T12:47:36.240Z] 4890.00 IOPS, 19.10 MiB/s [2024-11-15T12:47:37.178Z] 4895.60 IOPS, 19.12 MiB/s [2024-11-15T12:47:38.116Z] 4902.67 IOPS, 19.15 MiB/s [2024-11-15T12:47:39.053Z] 4906.57 IOPS, 19.17 MiB/s [2024-11-15T12:47:39.990Z] 4906.38 IOPS, 19.17 MiB/s [2024-11-15T12:47:40.926Z] 4903.78 IOPS, 19.16 MiB/s [2024-11-15T12:47:41.185Z] 4895.30 IOPS, 19.12 MiB/s 00:12:32.515 Latency(us) 00:12:32.515 [2024-11-15T12:47:41.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:32.515 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:32.515 Verification LBA range: start 0x0 length 0x2000 00:12:32.515 TLSTESTn1 : 10.01 4901.00 19.14 0.00 0.00 26074.07 4587.52 21090.68 00:12:32.515 [2024-11-15T12:47:41.185Z] =================================================================================================================== 00:12:32.515 [2024-11-15T12:47:41.185Z] Total : 4901.00 19.14 0.00 0.00 26074.07 4587.52 21090.68 00:12:32.515 { 00:12:32.515 "results": [ 00:12:32.515 { 00:12:32.515 "job": "TLSTESTn1", 00:12:32.515 "core_mask": "0x4", 00:12:32.515 "workload": "verify", 00:12:32.515 "status": "finished", 00:12:32.515 "verify_range": { 00:12:32.515 "start": 0, 00:12:32.515 "length": 8192 00:12:32.515 }, 00:12:32.515 "queue_depth": 128, 00:12:32.515 "io_size": 4096, 00:12:32.515 "runtime": 10.014482, 00:12:32.515 "iops": 4901.0023683701265, 00:12:32.515 "mibps": 19.144540501445807, 00:12:32.515 "io_failed": 0, 00:12:32.515 "io_timeout": 0, 00:12:32.515 "avg_latency_us": 26074.06850345718, 00:12:32.515 "min_latency_us": 4587.52, 00:12:32.515 
"max_latency_us": 21090.676363636365 00:12:32.515 } 00:12:32.515 ], 00:12:32.515 "core_count": 1 00:12:32.515 } 00:12:32.515 12:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:32.515 12:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71213 00:12:32.516 12:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71213 ']' 00:12:32.516 12:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71213 00:12:32.516 12:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:32.516 12:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:32.516 12:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71213 00:12:32.516 killing process with pid 71213 00:12:32.516 Received shutdown signal, test time was about 10.000000 seconds 00:12:32.516 00:12:32.516 Latency(us) 00:12:32.516 [2024-11-15T12:47:41.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:32.516 [2024-11-15T12:47:41.186Z] =================================================================================================================== 00:12:32.516 [2024-11-15T12:47:41.186Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:32.516 12:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:32.516 12:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:32.516 12:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71213' 00:12:32.516 12:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71213 00:12:32.516 12:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71213 00:12:32.516 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.fCevzAXk6f 00:12:32.516 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fCevzAXk6f 00:12:32.516 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:12:32.516 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fCevzAXk6f 00:12:32.516 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:12:32.516 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:32.516 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:12:32.516 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:32.516 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fCevzAXk6f 00:12:32.516 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:32.516 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:32.516 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:32.516 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.fCevzAXk6f 00:12:32.516 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:32.516 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71344 00:12:32.516 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:32.516 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:32.516 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71344 /var/tmp/bdevperf.sock 00:12:32.516 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71344 ']' 00:12:32.516 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:32.516 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:32.516 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:32.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:32.516 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:32.516 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:32.516 [2024-11-15 12:47:41.176099] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
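Between the successful run and this one the harness relaxed the key file to mode 0666 (chmod 0666 /tmp/tmp.fCevzAXk6f above), so bdevperf pid 71344 is expected to fail one step earlier than the previous negative tests: keyring_file_add_key itself refuses a key file that is readable by group or other, and the attach then fails because key0 was never added. In shell terms (paths and NQNs from the log):

    chmod 0666 /tmp/tmp.fCevzAXk6f    # deliberately too permissive
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    $RPC keyring_file_add_key key0 /tmp/tmp.fCevzAXk6f     # -> -1 "Operation not permitted" (mode 0100666 rejected)
    $RPC bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0   # -> -126 "Required key not available"
    chmod 0600 /tmp/tmp.fCevzAXk6f    # restored later so the remaining tests can use the key again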
00:12:32.516 [2024-11-15 12:47:41.176213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71344 ] 00:12:32.775 [2024-11-15 12:47:41.321453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.775 [2024-11-15 12:47:41.350201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.775 [2024-11-15 12:47:41.378507] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:32.775 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:32.775 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:32.775 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fCevzAXk6f 00:12:33.034 [2024-11-15 12:47:41.671627] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.fCevzAXk6f': 0100666 00:12:33.034 [2024-11-15 12:47:41.671675] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:12:33.034 request: 00:12:33.034 { 00:12:33.034 "name": "key0", 00:12:33.034 "path": "/tmp/tmp.fCevzAXk6f", 00:12:33.034 "method": "keyring_file_add_key", 00:12:33.034 "req_id": 1 00:12:33.034 } 00:12:33.034 Got JSON-RPC error response 00:12:33.034 response: 00:12:33.034 { 00:12:33.034 "code": -1, 00:12:33.034 "message": "Operation not permitted" 00:12:33.034 } 00:12:33.034 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:33.294 [2024-11-15 12:47:41.959778] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:33.294 [2024-11-15 12:47:41.959861] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:12:33.554 request: 00:12:33.554 { 00:12:33.554 "name": "TLSTEST", 00:12:33.554 "trtype": "tcp", 00:12:33.554 "traddr": "10.0.0.3", 00:12:33.554 "adrfam": "ipv4", 00:12:33.554 "trsvcid": "4420", 00:12:33.554 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:33.554 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:33.554 "prchk_reftag": false, 00:12:33.554 "prchk_guard": false, 00:12:33.554 "hdgst": false, 00:12:33.554 "ddgst": false, 00:12:33.554 "psk": "key0", 00:12:33.554 "allow_unrecognized_csi": false, 00:12:33.554 "method": "bdev_nvme_attach_controller", 00:12:33.554 "req_id": 1 00:12:33.554 } 00:12:33.554 Got JSON-RPC error response 00:12:33.554 response: 00:12:33.554 { 00:12:33.554 "code": -126, 00:12:33.554 "message": "Required key not available" 00:12:33.554 } 00:12:33.554 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71344 00:12:33.554 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71344 ']' 00:12:33.554 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71344 00:12:33.554 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:33.554 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:33.554 12:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71344 00:12:33.554 killing process with pid 71344 00:12:33.554 Received shutdown signal, test time was about 10.000000 seconds 00:12:33.554 00:12:33.554 Latency(us) 00:12:33.554 [2024-11-15T12:47:42.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:33.554 [2024-11-15T12:47:42.224Z] =================================================================================================================== 00:12:33.554 [2024-11-15T12:47:42.224Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:33.554 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:33.554 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:33.554 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71344' 00:12:33.554 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71344 00:12:33.554 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71344 00:12:33.554 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:12:33.554 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:12:33.554 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:33.554 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:33.554 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:33.554 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71166 00:12:33.554 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71166 ']' 00:12:33.554 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71166 00:12:33.554 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:33.554 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:33.554 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71166 00:12:33.554 killing process with pid 71166 00:12:33.554 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:33.554 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:33.554 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71166' 00:12:33.554 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71166 00:12:33.554 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71166 00:12:33.813 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:12:33.813 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:33.813 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:33.813 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:12:33.813 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71370 00:12:33.813 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:33.813 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71370 00:12:33.813 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71370 ']' 00:12:33.813 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.813 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:33.813 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.813 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:33.813 12:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:33.813 [2024-11-15 12:47:42.356992] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:12:33.813 [2024-11-15 12:47:42.357102] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.071 [2024-11-15 12:47:42.501250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.071 [2024-11-15 12:47:42.528920] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:34.071 [2024-11-15 12:47:42.528991] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:34.072 [2024-11-15 12:47:42.529001] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:34.072 [2024-11-15 12:47:42.529008] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:34.072 [2024-11-15 12:47:42.529014] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:34.072 [2024-11-15 12:47:42.529255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.072 [2024-11-15 12:47:42.557529] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:34.639 12:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:34.639 12:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:34.639 12:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:34.640 12:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:34.640 12:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:34.640 12:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.640 12:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.fCevzAXk6f 00:12:34.640 12:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:12:34.640 12:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.fCevzAXk6f 00:12:34.640 12:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:12:34.640 12:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:34.640 12:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:12:34.640 12:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:34.640 12:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.fCevzAXk6f 00:12:34.640 12:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.fCevzAXk6f 00:12:34.640 12:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:34.899 [2024-11-15 12:47:43.474332] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:34.899 12:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:35.157 12:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:12:35.415 [2024-11-15 12:47:43.970410] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:35.415 [2024-11-15 12:47:43.970607] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:35.415 12:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:35.674 malloc0 00:12:35.674 12:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:35.933 12:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.fCevzAXk6f 00:12:36.190 
[2024-11-15 12:47:44.727542] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.fCevzAXk6f': 0100666 00:12:36.190 [2024-11-15 12:47:44.727577] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:12:36.190 request: 00:12:36.190 { 00:12:36.190 "name": "key0", 00:12:36.190 "path": "/tmp/tmp.fCevzAXk6f", 00:12:36.190 "method": "keyring_file_add_key", 00:12:36.190 "req_id": 1 00:12:36.190 } 00:12:36.190 Got JSON-RPC error response 00:12:36.190 response: 00:12:36.190 { 00:12:36.190 "code": -1, 00:12:36.190 "message": "Operation not permitted" 00:12:36.190 } 00:12:36.190 12:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:12:36.449 [2024-11-15 12:47:44.935619] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:12:36.449 [2024-11-15 12:47:44.935682] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:12:36.449 request: 00:12:36.449 { 00:12:36.449 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:36.449 "host": "nqn.2016-06.io.spdk:host1", 00:12:36.449 "psk": "key0", 00:12:36.449 "method": "nvmf_subsystem_add_host", 00:12:36.449 "req_id": 1 00:12:36.449 } 00:12:36.449 Got JSON-RPC error response 00:12:36.449 response: 00:12:36.449 { 00:12:36.449 "code": -32603, 00:12:36.449 "message": "Internal error" 00:12:36.449 } 00:12:36.449 12:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:12:36.449 12:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:36.449 12:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:36.449 12:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:36.449 12:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 71370 00:12:36.449 12:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71370 ']' 00:12:36.449 12:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71370 00:12:36.449 12:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:36.449 12:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:36.449 12:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71370 00:12:36.449 12:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:36.449 12:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:36.449 killing process with pid 71370 00:12:36.449 12:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71370' 00:12:36.449 12:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71370 00:12:36.449 12:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71370 00:12:36.449 12:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.fCevzAXk6f 00:12:36.449 12:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:12:36.449 12:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:36.449 12:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:36.449 12:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:36.709 12:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71439 00:12:36.709 12:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71439 00:12:36.709 12:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71439 ']' 00:12:36.709 12:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.709 12:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:36.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.709 12:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.709 12:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:36.709 12:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:36.709 12:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:36.709 [2024-11-15 12:47:45.166252] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:12:36.709 [2024-11-15 12:47:45.166334] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.709 [2024-11-15 12:47:45.310008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.709 [2024-11-15 12:47:45.336463] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.709 [2024-11-15 12:47:45.336532] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:36.709 [2024-11-15 12:47:45.336557] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:36.709 [2024-11-15 12:47:45.336564] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:36.709 [2024-11-15 12:47:45.336569] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
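Note on the keyring_file_add_key error a few lines up: it is the negative case exercised by the NOT wrapper at tls.sh@178. The PSK file still had mode 0100666, and SPDK's file-based keyring rejects key files whose permissions are too permissive, which is also why the following nvmf_subsystem_add_host came back with "Internal error". tls.sh@182 therefore tightens the file mode before the target restart that begins here, after which the same key is accepted. A minimal sketch of that fix (rpc.py abbreviates the /home/vagrant/spdk_repo/spdk/scripts/rpc.py path used throughout this run):

  chmod 0600 /tmp/tmp.fCevzAXk6f
  rpc.py keyring_file_add_key key0 /tmp/tmp.fCevzAXk6f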
00:12:36.709 [2024-11-15 12:47:45.336841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.709 [2024-11-15 12:47:45.363495] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:36.968 12:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:36.968 12:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:36.968 12:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:36.968 12:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:36.968 12:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:36.968 12:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.968 12:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.fCevzAXk6f 00:12:36.968 12:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.fCevzAXk6f 00:12:36.969 12:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:37.228 [2024-11-15 12:47:45.689441] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:37.228 12:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:37.488 12:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:12:37.747 [2024-11-15 12:47:46.185564] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:37.747 [2024-11-15 12:47:46.185849] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:37.748 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:38.007 malloc0 00:12:38.007 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:38.266 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.fCevzAXk6f 00:12:38.266 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:12:38.525 12:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=71482 00:12:38.525 12:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:38.525 12:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:38.525 12:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 71482 /var/tmp/bdevperf.sock 00:12:38.525 12:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71482 ']' 
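For reference, the setup_nvmf_tgt helper called at tls.sh@186 configures the restarted target with the RPC sequence just traced; collected in one place as a sketch (rpc.py again abbreviates the in-repo script path, and the key path and NQNs are the ones from this run):

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.fCevzAXk6f
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The -k on the listener is what requests the TLS-secured listener (logged above as experimental), and --psk key0 ties host1's admission to the key registered in the keyring.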
00:12:38.525 12:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:38.525 12:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:38.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:38.525 12:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:38.525 12:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:38.525 12:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:38.785 [2024-11-15 12:47:47.231734] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:12:38.785 [2024-11-15 12:47:47.231835] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71482 ] 00:12:38.785 [2024-11-15 12:47:47.385287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.785 [2024-11-15 12:47:47.425136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.043 [2024-11-15 12:47:47.459452] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:39.610 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:39.610 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:39.610 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fCevzAXk6f 00:12:39.871 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:40.166 [2024-11-15 12:47:48.601342] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:40.166 TLSTESTn1 00:12:40.166 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:40.430 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:12:40.430 "subsystems": [ 00:12:40.430 { 00:12:40.430 "subsystem": "keyring", 00:12:40.430 "config": [ 00:12:40.430 { 00:12:40.430 "method": "keyring_file_add_key", 00:12:40.430 "params": { 00:12:40.430 "name": "key0", 00:12:40.430 "path": "/tmp/tmp.fCevzAXk6f" 00:12:40.430 } 00:12:40.430 } 00:12:40.430 ] 00:12:40.430 }, 00:12:40.430 { 00:12:40.430 "subsystem": "iobuf", 00:12:40.430 "config": [ 00:12:40.430 { 00:12:40.430 "method": "iobuf_set_options", 00:12:40.430 "params": { 00:12:40.430 "small_pool_count": 8192, 00:12:40.430 "large_pool_count": 1024, 00:12:40.430 "small_bufsize": 8192, 00:12:40.430 "large_bufsize": 135168, 00:12:40.430 "enable_numa": false 00:12:40.430 } 00:12:40.430 } 00:12:40.430 ] 00:12:40.430 }, 00:12:40.430 { 00:12:40.430 "subsystem": "sock", 00:12:40.430 "config": [ 00:12:40.430 { 00:12:40.430 "method": "sock_set_default_impl", 00:12:40.430 "params": { 
00:12:40.430 "impl_name": "uring" 00:12:40.430 } 00:12:40.430 }, 00:12:40.430 { 00:12:40.430 "method": "sock_impl_set_options", 00:12:40.430 "params": { 00:12:40.430 "impl_name": "ssl", 00:12:40.430 "recv_buf_size": 4096, 00:12:40.430 "send_buf_size": 4096, 00:12:40.430 "enable_recv_pipe": true, 00:12:40.430 "enable_quickack": false, 00:12:40.430 "enable_placement_id": 0, 00:12:40.431 "enable_zerocopy_send_server": true, 00:12:40.431 "enable_zerocopy_send_client": false, 00:12:40.431 "zerocopy_threshold": 0, 00:12:40.431 "tls_version": 0, 00:12:40.431 "enable_ktls": false 00:12:40.431 } 00:12:40.431 }, 00:12:40.431 { 00:12:40.431 "method": "sock_impl_set_options", 00:12:40.431 "params": { 00:12:40.431 "impl_name": "posix", 00:12:40.431 "recv_buf_size": 2097152, 00:12:40.431 "send_buf_size": 2097152, 00:12:40.431 "enable_recv_pipe": true, 00:12:40.431 "enable_quickack": false, 00:12:40.431 "enable_placement_id": 0, 00:12:40.431 "enable_zerocopy_send_server": true, 00:12:40.431 "enable_zerocopy_send_client": false, 00:12:40.431 "zerocopy_threshold": 0, 00:12:40.431 "tls_version": 0, 00:12:40.431 "enable_ktls": false 00:12:40.431 } 00:12:40.431 }, 00:12:40.431 { 00:12:40.431 "method": "sock_impl_set_options", 00:12:40.431 "params": { 00:12:40.431 "impl_name": "uring", 00:12:40.431 "recv_buf_size": 2097152, 00:12:40.431 "send_buf_size": 2097152, 00:12:40.431 "enable_recv_pipe": true, 00:12:40.431 "enable_quickack": false, 00:12:40.431 "enable_placement_id": 0, 00:12:40.431 "enable_zerocopy_send_server": false, 00:12:40.431 "enable_zerocopy_send_client": false, 00:12:40.431 "zerocopy_threshold": 0, 00:12:40.431 "tls_version": 0, 00:12:40.431 "enable_ktls": false 00:12:40.431 } 00:12:40.431 } 00:12:40.431 ] 00:12:40.431 }, 00:12:40.431 { 00:12:40.431 "subsystem": "vmd", 00:12:40.431 "config": [] 00:12:40.431 }, 00:12:40.431 { 00:12:40.431 "subsystem": "accel", 00:12:40.431 "config": [ 00:12:40.431 { 00:12:40.431 "method": "accel_set_options", 00:12:40.431 "params": { 00:12:40.431 "small_cache_size": 128, 00:12:40.431 "large_cache_size": 16, 00:12:40.431 "task_count": 2048, 00:12:40.431 "sequence_count": 2048, 00:12:40.431 "buf_count": 2048 00:12:40.431 } 00:12:40.431 } 00:12:40.431 ] 00:12:40.431 }, 00:12:40.431 { 00:12:40.431 "subsystem": "bdev", 00:12:40.431 "config": [ 00:12:40.431 { 00:12:40.431 "method": "bdev_set_options", 00:12:40.431 "params": { 00:12:40.431 "bdev_io_pool_size": 65535, 00:12:40.431 "bdev_io_cache_size": 256, 00:12:40.431 "bdev_auto_examine": true, 00:12:40.431 "iobuf_small_cache_size": 128, 00:12:40.431 "iobuf_large_cache_size": 16 00:12:40.431 } 00:12:40.431 }, 00:12:40.431 { 00:12:40.431 "method": "bdev_raid_set_options", 00:12:40.431 "params": { 00:12:40.431 "process_window_size_kb": 1024, 00:12:40.431 "process_max_bandwidth_mb_sec": 0 00:12:40.431 } 00:12:40.431 }, 00:12:40.431 { 00:12:40.431 "method": "bdev_iscsi_set_options", 00:12:40.431 "params": { 00:12:40.431 "timeout_sec": 30 00:12:40.431 } 00:12:40.431 }, 00:12:40.431 { 00:12:40.431 "method": "bdev_nvme_set_options", 00:12:40.431 "params": { 00:12:40.431 "action_on_timeout": "none", 00:12:40.431 "timeout_us": 0, 00:12:40.431 "timeout_admin_us": 0, 00:12:40.431 "keep_alive_timeout_ms": 10000, 00:12:40.431 "arbitration_burst": 0, 00:12:40.431 "low_priority_weight": 0, 00:12:40.431 "medium_priority_weight": 0, 00:12:40.431 "high_priority_weight": 0, 00:12:40.431 "nvme_adminq_poll_period_us": 10000, 00:12:40.431 "nvme_ioq_poll_period_us": 0, 00:12:40.431 "io_queue_requests": 0, 00:12:40.431 "delay_cmd_submit": 
true, 00:12:40.431 "transport_retry_count": 4, 00:12:40.431 "bdev_retry_count": 3, 00:12:40.431 "transport_ack_timeout": 0, 00:12:40.431 "ctrlr_loss_timeout_sec": 0, 00:12:40.431 "reconnect_delay_sec": 0, 00:12:40.431 "fast_io_fail_timeout_sec": 0, 00:12:40.431 "disable_auto_failback": false, 00:12:40.431 "generate_uuids": false, 00:12:40.431 "transport_tos": 0, 00:12:40.431 "nvme_error_stat": false, 00:12:40.431 "rdma_srq_size": 0, 00:12:40.431 "io_path_stat": false, 00:12:40.431 "allow_accel_sequence": false, 00:12:40.431 "rdma_max_cq_size": 0, 00:12:40.431 "rdma_cm_event_timeout_ms": 0, 00:12:40.431 "dhchap_digests": [ 00:12:40.431 "sha256", 00:12:40.431 "sha384", 00:12:40.431 "sha512" 00:12:40.431 ], 00:12:40.431 "dhchap_dhgroups": [ 00:12:40.431 "null", 00:12:40.431 "ffdhe2048", 00:12:40.431 "ffdhe3072", 00:12:40.431 "ffdhe4096", 00:12:40.431 "ffdhe6144", 00:12:40.431 "ffdhe8192" 00:12:40.431 ] 00:12:40.431 } 00:12:40.431 }, 00:12:40.431 { 00:12:40.431 "method": "bdev_nvme_set_hotplug", 00:12:40.431 "params": { 00:12:40.431 "period_us": 100000, 00:12:40.431 "enable": false 00:12:40.431 } 00:12:40.431 }, 00:12:40.431 { 00:12:40.431 "method": "bdev_malloc_create", 00:12:40.431 "params": { 00:12:40.431 "name": "malloc0", 00:12:40.431 "num_blocks": 8192, 00:12:40.431 "block_size": 4096, 00:12:40.431 "physical_block_size": 4096, 00:12:40.431 "uuid": "95c4d154-757b-4541-8e01-b3afb7e1089b", 00:12:40.431 "optimal_io_boundary": 0, 00:12:40.431 "md_size": 0, 00:12:40.431 "dif_type": 0, 00:12:40.431 "dif_is_head_of_md": false, 00:12:40.431 "dif_pi_format": 0 00:12:40.431 } 00:12:40.431 }, 00:12:40.431 { 00:12:40.431 "method": "bdev_wait_for_examine" 00:12:40.431 } 00:12:40.431 ] 00:12:40.431 }, 00:12:40.431 { 00:12:40.431 "subsystem": "nbd", 00:12:40.431 "config": [] 00:12:40.431 }, 00:12:40.431 { 00:12:40.431 "subsystem": "scheduler", 00:12:40.431 "config": [ 00:12:40.431 { 00:12:40.431 "method": "framework_set_scheduler", 00:12:40.431 "params": { 00:12:40.431 "name": "static" 00:12:40.431 } 00:12:40.431 } 00:12:40.431 ] 00:12:40.431 }, 00:12:40.431 { 00:12:40.431 "subsystem": "nvmf", 00:12:40.431 "config": [ 00:12:40.431 { 00:12:40.431 "method": "nvmf_set_config", 00:12:40.431 "params": { 00:12:40.431 "discovery_filter": "match_any", 00:12:40.431 "admin_cmd_passthru": { 00:12:40.431 "identify_ctrlr": false 00:12:40.431 }, 00:12:40.431 "dhchap_digests": [ 00:12:40.431 "sha256", 00:12:40.431 "sha384", 00:12:40.431 "sha512" 00:12:40.431 ], 00:12:40.431 "dhchap_dhgroups": [ 00:12:40.431 "null", 00:12:40.431 "ffdhe2048", 00:12:40.431 "ffdhe3072", 00:12:40.431 "ffdhe4096", 00:12:40.431 "ffdhe6144", 00:12:40.431 "ffdhe8192" 00:12:40.431 ] 00:12:40.431 } 00:12:40.431 }, 00:12:40.431 { 00:12:40.431 "method": "nvmf_set_max_subsystems", 00:12:40.431 "params": { 00:12:40.431 "max_subsystems": 1024 00:12:40.431 } 00:12:40.431 }, 00:12:40.431 { 00:12:40.431 "method": "nvmf_set_crdt", 00:12:40.431 "params": { 00:12:40.431 "crdt1": 0, 00:12:40.431 "crdt2": 0, 00:12:40.431 "crdt3": 0 00:12:40.431 } 00:12:40.431 }, 00:12:40.431 { 00:12:40.431 "method": "nvmf_create_transport", 00:12:40.431 "params": { 00:12:40.431 "trtype": "TCP", 00:12:40.431 "max_queue_depth": 128, 00:12:40.431 "max_io_qpairs_per_ctrlr": 127, 00:12:40.431 "in_capsule_data_size": 4096, 00:12:40.431 "max_io_size": 131072, 00:12:40.431 "io_unit_size": 131072, 00:12:40.431 "max_aq_depth": 128, 00:12:40.431 "num_shared_buffers": 511, 00:12:40.431 "buf_cache_size": 4294967295, 00:12:40.431 "dif_insert_or_strip": false, 00:12:40.431 "zcopy": false, 
00:12:40.431 "c2h_success": false, 00:12:40.431 "sock_priority": 0, 00:12:40.431 "abort_timeout_sec": 1, 00:12:40.431 "ack_timeout": 0, 00:12:40.431 "data_wr_pool_size": 0 00:12:40.431 } 00:12:40.431 }, 00:12:40.431 { 00:12:40.431 "method": "nvmf_create_subsystem", 00:12:40.431 "params": { 00:12:40.431 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:40.431 "allow_any_host": false, 00:12:40.432 "serial_number": "SPDK00000000000001", 00:12:40.432 "model_number": "SPDK bdev Controller", 00:12:40.432 "max_namespaces": 10, 00:12:40.432 "min_cntlid": 1, 00:12:40.432 "max_cntlid": 65519, 00:12:40.432 "ana_reporting": false 00:12:40.432 } 00:12:40.432 }, 00:12:40.432 { 00:12:40.432 "method": "nvmf_subsystem_add_host", 00:12:40.432 "params": { 00:12:40.432 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:40.432 "host": "nqn.2016-06.io.spdk:host1", 00:12:40.432 "psk": "key0" 00:12:40.432 } 00:12:40.432 }, 00:12:40.432 { 00:12:40.432 "method": "nvmf_subsystem_add_ns", 00:12:40.432 "params": { 00:12:40.432 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:40.432 "namespace": { 00:12:40.432 "nsid": 1, 00:12:40.432 "bdev_name": "malloc0", 00:12:40.432 "nguid": "95C4D154757B45418E01B3AFB7E1089B", 00:12:40.432 "uuid": "95c4d154-757b-4541-8e01-b3afb7e1089b", 00:12:40.432 "no_auto_visible": false 00:12:40.432 } 00:12:40.432 } 00:12:40.432 }, 00:12:40.432 { 00:12:40.432 "method": "nvmf_subsystem_add_listener", 00:12:40.432 "params": { 00:12:40.432 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:40.432 "listen_address": { 00:12:40.432 "trtype": "TCP", 00:12:40.432 "adrfam": "IPv4", 00:12:40.432 "traddr": "10.0.0.3", 00:12:40.432 "trsvcid": "4420" 00:12:40.432 }, 00:12:40.432 "secure_channel": true 00:12:40.432 } 00:12:40.432 } 00:12:40.432 ] 00:12:40.432 } 00:12:40.432 ] 00:12:40.432 }' 00:12:40.432 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:12:40.691 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:12:40.691 "subsystems": [ 00:12:40.691 { 00:12:40.691 "subsystem": "keyring", 00:12:40.691 "config": [ 00:12:40.691 { 00:12:40.691 "method": "keyring_file_add_key", 00:12:40.691 "params": { 00:12:40.691 "name": "key0", 00:12:40.691 "path": "/tmp/tmp.fCevzAXk6f" 00:12:40.691 } 00:12:40.691 } 00:12:40.691 ] 00:12:40.691 }, 00:12:40.691 { 00:12:40.691 "subsystem": "iobuf", 00:12:40.691 "config": [ 00:12:40.691 { 00:12:40.691 "method": "iobuf_set_options", 00:12:40.691 "params": { 00:12:40.691 "small_pool_count": 8192, 00:12:40.691 "large_pool_count": 1024, 00:12:40.691 "small_bufsize": 8192, 00:12:40.691 "large_bufsize": 135168, 00:12:40.691 "enable_numa": false 00:12:40.691 } 00:12:40.691 } 00:12:40.691 ] 00:12:40.691 }, 00:12:40.691 { 00:12:40.691 "subsystem": "sock", 00:12:40.691 "config": [ 00:12:40.691 { 00:12:40.691 "method": "sock_set_default_impl", 00:12:40.691 "params": { 00:12:40.691 "impl_name": "uring" 00:12:40.691 } 00:12:40.691 }, 00:12:40.691 { 00:12:40.691 "method": "sock_impl_set_options", 00:12:40.691 "params": { 00:12:40.691 "impl_name": "ssl", 00:12:40.691 "recv_buf_size": 4096, 00:12:40.691 "send_buf_size": 4096, 00:12:40.691 "enable_recv_pipe": true, 00:12:40.691 "enable_quickack": false, 00:12:40.691 "enable_placement_id": 0, 00:12:40.691 "enable_zerocopy_send_server": true, 00:12:40.691 "enable_zerocopy_send_client": false, 00:12:40.691 "zerocopy_threshold": 0, 00:12:40.691 "tls_version": 0, 00:12:40.691 "enable_ktls": false 00:12:40.691 } 00:12:40.691 }, 
00:12:40.691 { 00:12:40.691 "method": "sock_impl_set_options", 00:12:40.691 "params": { 00:12:40.691 "impl_name": "posix", 00:12:40.691 "recv_buf_size": 2097152, 00:12:40.691 "send_buf_size": 2097152, 00:12:40.691 "enable_recv_pipe": true, 00:12:40.691 "enable_quickack": false, 00:12:40.691 "enable_placement_id": 0, 00:12:40.691 "enable_zerocopy_send_server": true, 00:12:40.691 "enable_zerocopy_send_client": false, 00:12:40.691 "zerocopy_threshold": 0, 00:12:40.691 "tls_version": 0, 00:12:40.691 "enable_ktls": false 00:12:40.691 } 00:12:40.691 }, 00:12:40.691 { 00:12:40.691 "method": "sock_impl_set_options", 00:12:40.691 "params": { 00:12:40.692 "impl_name": "uring", 00:12:40.692 "recv_buf_size": 2097152, 00:12:40.692 "send_buf_size": 2097152, 00:12:40.692 "enable_recv_pipe": true, 00:12:40.692 "enable_quickack": false, 00:12:40.692 "enable_placement_id": 0, 00:12:40.692 "enable_zerocopy_send_server": false, 00:12:40.692 "enable_zerocopy_send_client": false, 00:12:40.692 "zerocopy_threshold": 0, 00:12:40.692 "tls_version": 0, 00:12:40.692 "enable_ktls": false 00:12:40.692 } 00:12:40.692 } 00:12:40.692 ] 00:12:40.692 }, 00:12:40.692 { 00:12:40.692 "subsystem": "vmd", 00:12:40.692 "config": [] 00:12:40.692 }, 00:12:40.692 { 00:12:40.692 "subsystem": "accel", 00:12:40.692 "config": [ 00:12:40.692 { 00:12:40.692 "method": "accel_set_options", 00:12:40.692 "params": { 00:12:40.692 "small_cache_size": 128, 00:12:40.692 "large_cache_size": 16, 00:12:40.692 "task_count": 2048, 00:12:40.692 "sequence_count": 2048, 00:12:40.692 "buf_count": 2048 00:12:40.692 } 00:12:40.692 } 00:12:40.692 ] 00:12:40.692 }, 00:12:40.692 { 00:12:40.692 "subsystem": "bdev", 00:12:40.692 "config": [ 00:12:40.692 { 00:12:40.692 "method": "bdev_set_options", 00:12:40.692 "params": { 00:12:40.692 "bdev_io_pool_size": 65535, 00:12:40.692 "bdev_io_cache_size": 256, 00:12:40.692 "bdev_auto_examine": true, 00:12:40.692 "iobuf_small_cache_size": 128, 00:12:40.692 "iobuf_large_cache_size": 16 00:12:40.692 } 00:12:40.692 }, 00:12:40.692 { 00:12:40.692 "method": "bdev_raid_set_options", 00:12:40.692 "params": { 00:12:40.692 "process_window_size_kb": 1024, 00:12:40.692 "process_max_bandwidth_mb_sec": 0 00:12:40.692 } 00:12:40.692 }, 00:12:40.692 { 00:12:40.692 "method": "bdev_iscsi_set_options", 00:12:40.692 "params": { 00:12:40.692 "timeout_sec": 30 00:12:40.692 } 00:12:40.692 }, 00:12:40.692 { 00:12:40.692 "method": "bdev_nvme_set_options", 00:12:40.692 "params": { 00:12:40.692 "action_on_timeout": "none", 00:12:40.692 "timeout_us": 0, 00:12:40.692 "timeout_admin_us": 0, 00:12:40.692 "keep_alive_timeout_ms": 10000, 00:12:40.692 "arbitration_burst": 0, 00:12:40.692 "low_priority_weight": 0, 00:12:40.692 "medium_priority_weight": 0, 00:12:40.692 "high_priority_weight": 0, 00:12:40.692 "nvme_adminq_poll_period_us": 10000, 00:12:40.692 "nvme_ioq_poll_period_us": 0, 00:12:40.692 "io_queue_requests": 512, 00:12:40.692 "delay_cmd_submit": true, 00:12:40.692 "transport_retry_count": 4, 00:12:40.692 "bdev_retry_count": 3, 00:12:40.692 "transport_ack_timeout": 0, 00:12:40.692 "ctrlr_loss_timeout_sec": 0, 00:12:40.692 "reconnect_delay_sec": 0, 00:12:40.692 "fast_io_fail_timeout_sec": 0, 00:12:40.692 "disable_auto_failback": false, 00:12:40.692 "generate_uuids": false, 00:12:40.692 "transport_tos": 0, 00:12:40.692 "nvme_error_stat": false, 00:12:40.692 "rdma_srq_size": 0, 00:12:40.692 "io_path_stat": false, 00:12:40.692 "allow_accel_sequence": false, 00:12:40.692 "rdma_max_cq_size": 0, 00:12:40.692 "rdma_cm_event_timeout_ms": 0, 00:12:40.692 
"dhchap_digests": [ 00:12:40.692 "sha256", 00:12:40.692 "sha384", 00:12:40.692 "sha512" 00:12:40.692 ], 00:12:40.692 "dhchap_dhgroups": [ 00:12:40.692 "null", 00:12:40.692 "ffdhe2048", 00:12:40.692 "ffdhe3072", 00:12:40.692 "ffdhe4096", 00:12:40.692 "ffdhe6144", 00:12:40.692 "ffdhe8192" 00:12:40.692 ] 00:12:40.692 } 00:12:40.692 }, 00:12:40.692 { 00:12:40.692 "method": "bdev_nvme_attach_controller", 00:12:40.692 "params": { 00:12:40.692 "name": "TLSTEST", 00:12:40.692 "trtype": "TCP", 00:12:40.692 "adrfam": "IPv4", 00:12:40.692 "traddr": "10.0.0.3", 00:12:40.692 "trsvcid": "4420", 00:12:40.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:40.692 "prchk_reftag": false, 00:12:40.692 "prchk_guard": false, 00:12:40.692 "ctrlr_loss_timeout_sec": 0, 00:12:40.692 "reconnect_delay_sec": 0, 00:12:40.692 "fast_io_fail_timeout_sec": 0, 00:12:40.692 "psk": "key0", 00:12:40.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:40.692 "hdgst": false, 00:12:40.692 "ddgst": false, 00:12:40.692 "multipath": "multipath" 00:12:40.692 } 00:12:40.692 }, 00:12:40.692 { 00:12:40.692 "method": "bdev_nvme_set_hotplug", 00:12:40.692 "params": { 00:12:40.692 "period_us": 100000, 00:12:40.692 "enable": false 00:12:40.692 } 00:12:40.692 }, 00:12:40.692 { 00:12:40.692 "method": "bdev_wait_for_examine" 00:12:40.692 } 00:12:40.692 ] 00:12:40.692 }, 00:12:40.692 { 00:12:40.692 "subsystem": "nbd", 00:12:40.692 "config": [] 00:12:40.692 } 00:12:40.692 ] 00:12:40.692 }' 00:12:40.692 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 71482 00:12:40.692 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71482 ']' 00:12:40.692 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71482 00:12:40.692 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:40.692 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.692 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71482 00:12:40.692 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:40.692 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:40.692 killing process with pid 71482 00:12:40.692 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71482' 00:12:40.692 Received shutdown signal, test time was about 10.000000 seconds 00:12:40.692 00:12:40.692 Latency(us) 00:12:40.692 [2024-11-15T12:47:49.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:40.692 [2024-11-15T12:47:49.362Z] =================================================================================================================== 00:12:40.692 [2024-11-15T12:47:49.362Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:40.692 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71482 00:12:40.692 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71482 00:12:40.952 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 71439 00:12:40.952 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71439 ']' 00:12:40.952 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 71439 00:12:40.952 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:40.952 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.952 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71439 00:12:40.952 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:40.952 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:40.952 killing process with pid 71439 00:12:40.952 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71439' 00:12:40.952 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71439 00:12:40.952 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71439 00:12:41.212 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:12:41.212 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:41.212 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:41.212 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:12:41.212 "subsystems": [ 00:12:41.212 { 00:12:41.212 "subsystem": "keyring", 00:12:41.212 "config": [ 00:12:41.212 { 00:12:41.212 "method": "keyring_file_add_key", 00:12:41.212 "params": { 00:12:41.212 "name": "key0", 00:12:41.212 "path": "/tmp/tmp.fCevzAXk6f" 00:12:41.212 } 00:12:41.212 } 00:12:41.212 ] 00:12:41.212 }, 00:12:41.212 { 00:12:41.212 "subsystem": "iobuf", 00:12:41.212 "config": [ 00:12:41.212 { 00:12:41.212 "method": "iobuf_set_options", 00:12:41.212 "params": { 00:12:41.212 "small_pool_count": 8192, 00:12:41.212 "large_pool_count": 1024, 00:12:41.212 "small_bufsize": 8192, 00:12:41.212 "large_bufsize": 135168, 00:12:41.212 "enable_numa": false 00:12:41.212 } 00:12:41.212 } 00:12:41.212 ] 00:12:41.212 }, 00:12:41.212 { 00:12:41.212 "subsystem": "sock", 00:12:41.212 "config": [ 00:12:41.212 { 00:12:41.212 "method": "sock_set_default_impl", 00:12:41.212 "params": { 00:12:41.212 "impl_name": "uring" 00:12:41.212 } 00:12:41.212 }, 00:12:41.212 { 00:12:41.212 "method": "sock_impl_set_options", 00:12:41.212 "params": { 00:12:41.212 "impl_name": "ssl", 00:12:41.212 "recv_buf_size": 4096, 00:12:41.212 "send_buf_size": 4096, 00:12:41.212 "enable_recv_pipe": true, 00:12:41.212 "enable_quickack": false, 00:12:41.213 "enable_placement_id": 0, 00:12:41.213 "enable_zerocopy_send_server": true, 00:12:41.213 "enable_zerocopy_send_client": false, 00:12:41.213 "zerocopy_threshold": 0, 00:12:41.213 "tls_version": 0, 00:12:41.213 "enable_ktls": false 00:12:41.213 } 00:12:41.213 }, 00:12:41.213 { 00:12:41.213 "method": "sock_impl_set_options", 00:12:41.213 "params": { 00:12:41.213 "impl_name": "posix", 00:12:41.213 "recv_buf_size": 2097152, 00:12:41.213 "send_buf_size": 2097152, 00:12:41.213 "enable_recv_pipe": true, 00:12:41.213 "enable_quickack": false, 00:12:41.213 "enable_placement_id": 0, 00:12:41.213 "enable_zerocopy_send_server": true, 00:12:41.213 "enable_zerocopy_send_client": false, 00:12:41.213 "zerocopy_threshold": 0, 00:12:41.213 "tls_version": 0, 00:12:41.213 "enable_ktls": false 00:12:41.213 } 00:12:41.213 }, 00:12:41.213 { 00:12:41.213 "method": "sock_impl_set_options", 
00:12:41.213 "params": { 00:12:41.213 "impl_name": "uring", 00:12:41.213 "recv_buf_size": 2097152, 00:12:41.213 "send_buf_size": 2097152, 00:12:41.213 "enable_recv_pipe": true, 00:12:41.213 "enable_quickack": false, 00:12:41.213 "enable_placement_id": 0, 00:12:41.213 "enable_zerocopy_send_server": false, 00:12:41.213 "enable_zerocopy_send_client": false, 00:12:41.213 "zerocopy_threshold": 0, 00:12:41.213 "tls_version": 0, 00:12:41.213 "enable_ktls": false 00:12:41.213 } 00:12:41.213 } 00:12:41.213 ] 00:12:41.213 }, 00:12:41.213 { 00:12:41.213 "subsystem": "vmd", 00:12:41.213 "config": [] 00:12:41.213 }, 00:12:41.213 { 00:12:41.213 "subsystem": "accel", 00:12:41.213 "config": [ 00:12:41.213 { 00:12:41.213 "method": "accel_set_options", 00:12:41.213 "params": { 00:12:41.213 "small_cache_size": 128, 00:12:41.213 "large_cache_size": 16, 00:12:41.213 "task_count": 2048, 00:12:41.213 "sequence_count": 2048, 00:12:41.213 "buf_count": 2048 00:12:41.213 } 00:12:41.213 } 00:12:41.213 ] 00:12:41.213 }, 00:12:41.213 { 00:12:41.213 "subsystem": "bdev", 00:12:41.213 "config": [ 00:12:41.213 { 00:12:41.213 "method": "bdev_set_options", 00:12:41.213 "params": { 00:12:41.213 "bdev_io_pool_size": 65535, 00:12:41.213 "bdev_io_cache_size": 256, 00:12:41.213 "bdev_auto_examine": true, 00:12:41.213 "iobuf_small_cache_size": 128, 00:12:41.213 "iobuf_large_cache_size": 16 00:12:41.213 } 00:12:41.213 }, 00:12:41.213 { 00:12:41.213 "method": "bdev_raid_set_options", 00:12:41.213 "params": { 00:12:41.213 "process_window_size_kb": 1024, 00:12:41.213 "process_max_bandwidth_mb_sec": 0 00:12:41.213 } 00:12:41.213 }, 00:12:41.213 { 00:12:41.213 "method": "bdev_iscsi_set_options", 00:12:41.213 "params": { 00:12:41.213 "timeout_sec": 30 00:12:41.213 } 00:12:41.213 }, 00:12:41.213 { 00:12:41.213 "method": "bdev_nvme_set_options", 00:12:41.213 "params": { 00:12:41.213 "action_on_timeout": "none", 00:12:41.213 "timeout_us": 0, 00:12:41.213 "timeout_admin_us": 0, 00:12:41.213 "keep_alive_timeout_ms": 10000, 00:12:41.213 "arbitration_burst": 0, 00:12:41.213 "low_priority_weight": 0, 00:12:41.213 "medium_priority_weight": 0, 00:12:41.213 "high_priority_weight": 0, 00:12:41.213 "nvme_adminq_poll_period_us": 10000, 00:12:41.213 "nvme_ioq_poll_period_us": 0, 00:12:41.213 "io_queue_requests": 0, 00:12:41.213 "delay_cmd_submit": true, 00:12:41.213 "transport_retry_count": 4, 00:12:41.213 "bdev_retry_count": 3, 00:12:41.213 "transport_ack_timeout": 0, 00:12:41.213 "ctrlr_loss_timeout_sec": 0, 00:12:41.213 "reconnect_delay_sec": 0, 00:12:41.213 "fast_io_fail_timeout_sec": 0, 00:12:41.213 "disable_auto_failback": false, 00:12:41.213 "generate_uuids": false, 00:12:41.213 "transport_tos": 0, 00:12:41.213 "nvme_error_stat": false, 00:12:41.213 "rdma_srq_size": 0, 00:12:41.213 "io_path_stat": false, 00:12:41.213 "allow_accel_sequence": false, 00:12:41.213 "rdma_max_cq_size": 0, 00:12:41.213 "rdma_cm_event_timeout_ms": 0, 00:12:41.213 "dhchap_digests": [ 00:12:41.213 "sha256", 00:12:41.213 "sha384", 00:12:41.213 "sha512" 00:12:41.213 ], 00:12:41.213 "dhchap_dhgroups": [ 00:12:41.213 "null", 00:12:41.213 "ffdhe2048", 00:12:41.213 "ffdhe3072", 00:12:41.213 "ffdhe4096", 00:12:41.213 "ffdhe6144", 00:12:41.213 "ffdhe8192" 00:12:41.213 ] 00:12:41.213 } 00:12:41.213 }, 00:12:41.213 { 00:12:41.213 "method": "bdev_nvme_set_hotplug", 00:12:41.213 "params": { 00:12:41.213 "period_us": 100000, 00:12:41.213 "enable": false 00:12:41.213 } 00:12:41.213 }, 00:12:41.213 { 00:12:41.213 "method": "bdev_malloc_create", 00:12:41.213 "params": { 00:12:41.213 
"name": "malloc0", 00:12:41.213 "num_blocks": 8192, 00:12:41.213 "block_size": 4096, 00:12:41.213 "physical_block_size": 4096, 00:12:41.213 "uuid": "95c4d154-757b-4541-8e01-b3afb7e1089b", 00:12:41.213 "optimal_io_boundary": 0, 00:12:41.213 "md_size": 0, 00:12:41.213 "dif_type": 0, 00:12:41.213 "dif_is_head_of_md": false, 00:12:41.213 "dif_pi_format": 0 00:12:41.213 } 00:12:41.213 }, 00:12:41.213 { 00:12:41.213 "method": "bdev_wait_for_examine" 00:12:41.213 } 00:12:41.213 ] 00:12:41.213 }, 00:12:41.213 { 00:12:41.213 "subsystem": "nbd", 00:12:41.213 "config": [] 00:12:41.213 }, 00:12:41.213 { 00:12:41.213 "subsystem": "scheduler", 00:12:41.213 "config": [ 00:12:41.213 { 00:12:41.213 "method": "framework_set_scheduler", 00:12:41.213 "params": { 00:12:41.213 "name": "static" 00:12:41.213 } 00:12:41.213 } 00:12:41.213 ] 00:12:41.213 }, 00:12:41.213 { 00:12:41.213 "subsystem": "nvmf", 00:12:41.213 "config": [ 00:12:41.213 { 00:12:41.213 "method": "nvmf_set_config", 00:12:41.213 "params": { 00:12:41.213 "discovery_filter": "match_any", 00:12:41.213 "admin_cmd_passthru": { 00:12:41.213 "identify_ctrlr": false 00:12:41.213 }, 00:12:41.213 "dhchap_digests": [ 00:12:41.213 "sha256", 00:12:41.213 "sha384", 00:12:41.213 "sha512" 00:12:41.213 ], 00:12:41.213 "dhchap_dhgroups": [ 00:12:41.213 "null", 00:12:41.213 "ffdhe2048", 00:12:41.213 "ffdhe3072", 00:12:41.213 "ffdhe4096", 00:12:41.213 "ffdhe6144", 00:12:41.213 "ffdhe8192" 00:12:41.213 ] 00:12:41.213 } 00:12:41.213 }, 00:12:41.213 { 00:12:41.213 "method": "nvmf_set_max_subsystems", 00:12:41.213 "params": { 00:12:41.213 "max_subsystems": 1024 00:12:41.213 } 00:12:41.213 }, 00:12:41.213 { 00:12:41.213 "method": "nvmf_set_crdt", 00:12:41.213 "params": { 00:12:41.213 "crdt1": 0, 00:12:41.213 "crdt2": 0, 00:12:41.213 "crdt3": 0 00:12:41.213 } 00:12:41.213 }, 00:12:41.213 { 00:12:41.213 "method": "nvmf_create_transport", 00:12:41.213 "params": { 00:12:41.213 "trtype": "TCP", 00:12:41.213 "max_queue_depth": 128, 00:12:41.213 "max_io_qpairs_per_ctrlr": 127, 00:12:41.213 "in_capsule_data_size": 4096, 00:12:41.213 "max_io_size": 131072, 00:12:41.213 "io_unit_size": 131072, 00:12:41.213 "max_aq_depth": 128, 00:12:41.213 "num_shared_buffers": 511, 00:12:41.213 "buf_cache_size": 4294967295, 00:12:41.213 "dif_insert_or_strip": false, 00:12:41.213 "zcopy": false, 00:12:41.213 "c2h_success": false, 00:12:41.213 "sock_priority": 0, 00:12:41.213 "abort_timeout_sec": 1, 00:12:41.213 "ack_timeout": 0, 00:12:41.213 "data_wr_pool_size": 0 00:12:41.213 } 00:12:41.213 }, 00:12:41.213 { 00:12:41.213 "method": "nvmf_create_subsystem", 00:12:41.213 "params": { 00:12:41.213 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:41.213 "allow_any_host": false, 00:12:41.213 "serial_number": "SPDK00000000000001", 00:12:41.213 "model_number": "SPDK bdev Controller", 00:12:41.213 "max_namespaces": 10, 00:12:41.213 "min_cntlid": 1, 00:12:41.213 "max_cntlid": 65519, 00:12:41.214 "ana_reporting": false 00:12:41.214 } 00:12:41.214 }, 00:12:41.214 { 00:12:41.214 "method": "nvmf_subsystem_add_host", 00:12:41.214 "params": { 00:12:41.214 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:41.214 "host": "nqn.2016-06.io.spdk:host1", 00:12:41.214 "psk": "key0" 00:12:41.214 } 00:12:41.214 }, 00:12:41.214 { 00:12:41.214 "method": "nvmf_subsystem_add_ns", 00:12:41.214 "params": { 00:12:41.214 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:41.214 "namespace": { 00:12:41.214 "nsid": 1, 00:12:41.214 "bdev_name": "malloc0", 00:12:41.214 "nguid": "95C4D154757B45418E01B3AFB7E1089B", 00:12:41.214 "uuid": 
"95c4d154-757b-4541-8e01-b3afb7e1089b", 00:12:41.214 "no_auto_visible": false 00:12:41.214 } 00:12:41.214 } 00:12:41.214 }, 00:12:41.214 { 00:12:41.214 "method": "nvmf_subsystem_add_listener", 00:12:41.214 "params": { 00:12:41.214 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:41.214 "listen_address": { 00:12:41.214 "trtype": "TCP", 00:12:41.214 "adrfam": "IPv4", 00:12:41.214 "traddr": "10.0.0.3", 00:12:41.214 "trsvcid": "4420" 00:12:41.214 }, 00:12:41.214 "secure_channel": true 00:12:41.214 } 00:12:41.214 } 00:12:41.214 ] 00:12:41.214 } 00:12:41.214 ] 00:12:41.214 }' 00:12:41.214 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:41.214 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71531 00:12:41.214 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71531 00:12:41.214 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:12:41.214 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71531 ']' 00:12:41.214 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.214 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:41.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.214 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.214 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:41.214 12:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:41.214 [2024-11-15 12:47:49.697830] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:12:41.214 [2024-11-15 12:47:49.697918] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.214 [2024-11-15 12:47:49.839364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.214 [2024-11-15 12:47:49.865368] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:41.214 [2024-11-15 12:47:49.865435] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:41.214 [2024-11-15 12:47:49.865461] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:41.214 [2024-11-15 12:47:49.865468] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:41.214 [2024-11-15 12:47:49.865474] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:41.214 [2024-11-15 12:47:49.865871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.473 [2024-11-15 12:47:50.005388] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:41.473 [2024-11-15 12:47:50.061032] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:41.473 [2024-11-15 12:47:50.092981] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:41.473 [2024-11-15 12:47:50.093171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:42.041 12:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.041 12:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:42.041 12:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:42.041 12:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:42.041 12:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:42.301 12:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:42.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:42.301 12:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=71569 00:12:42.301 12:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 71569 /var/tmp/bdevperf.sock 00:12:42.301 12:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71569 ']' 00:12:42.301 12:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:42.301 12:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:42.301 12:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:12:42.301 12:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:42.301 12:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:12:42.301 12:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:42.301 12:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:12:42.301 "subsystems": [ 00:12:42.301 { 00:12:42.301 "subsystem": "keyring", 00:12:42.301 "config": [ 00:12:42.301 { 00:12:42.301 "method": "keyring_file_add_key", 00:12:42.301 "params": { 00:12:42.301 "name": "key0", 00:12:42.301 "path": "/tmp/tmp.fCevzAXk6f" 00:12:42.301 } 00:12:42.301 } 00:12:42.301 ] 00:12:42.301 }, 00:12:42.301 { 00:12:42.301 "subsystem": "iobuf", 00:12:42.301 "config": [ 00:12:42.301 { 00:12:42.301 "method": "iobuf_set_options", 00:12:42.301 "params": { 00:12:42.301 "small_pool_count": 8192, 00:12:42.301 "large_pool_count": 1024, 00:12:42.301 "small_bufsize": 8192, 00:12:42.301 "large_bufsize": 135168, 00:12:42.301 "enable_numa": false 00:12:42.301 } 00:12:42.301 } 00:12:42.301 ] 00:12:42.301 }, 00:12:42.301 { 00:12:42.301 "subsystem": "sock", 00:12:42.301 "config": [ 00:12:42.301 { 00:12:42.301 "method": "sock_set_default_impl", 00:12:42.301 "params": { 00:12:42.301 "impl_name": "uring" 00:12:42.301 } 00:12:42.301 }, 00:12:42.301 { 00:12:42.301 "method": "sock_impl_set_options", 00:12:42.301 "params": { 00:12:42.301 "impl_name": "ssl", 00:12:42.301 "recv_buf_size": 4096, 00:12:42.301 "send_buf_size": 4096, 00:12:42.301 "enable_recv_pipe": true, 00:12:42.301 "enable_quickack": false, 00:12:42.301 "enable_placement_id": 0, 00:12:42.301 "enable_zerocopy_send_server": true, 00:12:42.301 "enable_zerocopy_send_client": false, 00:12:42.302 "zerocopy_threshold": 0, 00:12:42.302 "tls_version": 0, 00:12:42.302 "enable_ktls": false 00:12:42.302 } 00:12:42.302 }, 00:12:42.302 { 00:12:42.302 "method": "sock_impl_set_options", 00:12:42.302 "params": { 00:12:42.302 "impl_name": "posix", 00:12:42.302 "recv_buf_size": 2097152, 00:12:42.302 "send_buf_size": 2097152, 00:12:42.302 "enable_recv_pipe": true, 00:12:42.302 "enable_quickack": false, 00:12:42.302 "enable_placement_id": 0, 00:12:42.302 "enable_zerocopy_send_server": true, 00:12:42.302 "enable_zerocopy_send_client": false, 00:12:42.302 "zerocopy_threshold": 0, 00:12:42.302 "tls_version": 0, 00:12:42.302 "enable_ktls": false 00:12:42.302 } 00:12:42.302 }, 00:12:42.302 { 00:12:42.302 "method": "sock_impl_set_options", 00:12:42.302 "params": { 00:12:42.302 "impl_name": "uring", 00:12:42.302 "recv_buf_size": 2097152, 00:12:42.302 "send_buf_size": 2097152, 00:12:42.302 "enable_recv_pipe": true, 00:12:42.302 "enable_quickack": false, 00:12:42.302 "enable_placement_id": 0, 00:12:42.302 "enable_zerocopy_send_server": false, 00:12:42.302 "enable_zerocopy_send_client": false, 00:12:42.302 "zerocopy_threshold": 0, 00:12:42.302 "tls_version": 0, 00:12:42.302 "enable_ktls": false 00:12:42.302 } 00:12:42.302 } 00:12:42.302 ] 00:12:42.302 }, 00:12:42.302 { 00:12:42.302 "subsystem": "vmd", 00:12:42.302 "config": [] 00:12:42.302 }, 00:12:42.302 { 00:12:42.302 "subsystem": "accel", 00:12:42.302 "config": [ 00:12:42.302 { 00:12:42.302 "method": "accel_set_options", 00:12:42.302 "params": { 00:12:42.302 "small_cache_size": 128, 00:12:42.302 "large_cache_size": 16, 00:12:42.302 "task_count": 2048, 00:12:42.302 "sequence_count": 
2048, 00:12:42.302 "buf_count": 2048 00:12:42.302 } 00:12:42.302 } 00:12:42.302 ] 00:12:42.302 }, 00:12:42.302 { 00:12:42.302 "subsystem": "bdev", 00:12:42.302 "config": [ 00:12:42.302 { 00:12:42.302 "method": "bdev_set_options", 00:12:42.302 "params": { 00:12:42.302 "bdev_io_pool_size": 65535, 00:12:42.302 "bdev_io_cache_size": 256, 00:12:42.302 "bdev_auto_examine": true, 00:12:42.302 "iobuf_small_cache_size": 128, 00:12:42.302 "iobuf_large_cache_size": 16 00:12:42.302 } 00:12:42.302 }, 00:12:42.302 { 00:12:42.302 "method": "bdev_raid_set_options", 00:12:42.302 "params": { 00:12:42.302 "process_window_size_kb": 1024, 00:12:42.302 "process_max_bandwidth_mb_sec": 0 00:12:42.302 } 00:12:42.302 }, 00:12:42.302 { 00:12:42.302 "method": "bdev_iscsi_set_options", 00:12:42.302 "params": { 00:12:42.302 "timeout_sec": 30 00:12:42.302 } 00:12:42.302 }, 00:12:42.302 { 00:12:42.302 "method": "bdev_nvme_set_options", 00:12:42.302 "params": { 00:12:42.302 "action_on_timeout": "none", 00:12:42.302 "timeout_us": 0, 00:12:42.302 "timeout_admin_us": 0, 00:12:42.302 "keep_alive_timeout_ms": 10000, 00:12:42.302 "arbitration_burst": 0, 00:12:42.302 "low_priority_weight": 0, 00:12:42.302 "medium_priority_weight": 0, 00:12:42.302 "high_priority_weight": 0, 00:12:42.302 "nvme_adminq_poll_period_us": 10000, 00:12:42.302 "nvme_ioq_poll_period_us": 0, 00:12:42.302 "io_queue_requests": 512, 00:12:42.302 "delay_cmd_submit": true, 00:12:42.302 "transport_retry_count": 4, 00:12:42.302 "bdev_retry_count": 3, 00:12:42.302 "transport_ack_timeout": 0, 00:12:42.302 "ctrlr_loss_timeout_sec": 0, 00:12:42.302 "reconnect_delay_sec": 0, 00:12:42.302 "fast_io_fail_timeout_sec": 0, 00:12:42.302 "disable_auto_failback": false, 00:12:42.302 "generate_uuids": false, 00:12:42.302 "transport_tos": 0, 00:12:42.302 "nvme_error_stat": false, 00:12:42.302 "rdma_srq_size": 0, 00:12:42.302 "io_path_stat": false, 00:12:42.302 "allow_accel_sequence": false, 00:12:42.302 "rdma_max_cq_size": 0, 00:12:42.302 "rdma_cm_event_timeout_ms": 0, 00:12:42.302 "dhchap_digests": [ 00:12:42.302 "sha256", 00:12:42.302 "sha384", 00:12:42.302 "sha512" 00:12:42.302 ], 00:12:42.302 "dhchap_dhgroups": [ 00:12:42.302 "null", 00:12:42.302 "ffdhe2048", 00:12:42.302 "ffdhe3072", 00:12:42.302 "ffdhe4096", 00:12:42.302 "ffdhe6144", 00:12:42.302 "ffdhe8192" 00:12:42.302 ] 00:12:42.302 } 00:12:42.302 }, 00:12:42.302 { 00:12:42.302 "method": "bdev_nvme_attach_controller", 00:12:42.302 "params": { 00:12:42.302 "name": "TLSTEST", 00:12:42.302 "trtype": "TCP", 00:12:42.302 "adrfam": "IPv4", 00:12:42.302 "traddr": "10.0.0.3", 00:12:42.302 "trsvcid": "4420", 00:12:42.302 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:42.302 "prchk_reftag": false, 00:12:42.302 "prchk_guard": false, 00:12:42.302 "ctrlr_loss_timeout_sec": 0, 00:12:42.302 "reconnect_delay_sec": 0, 00:12:42.302 "fast_io_fail_timeout_sec": 0, 00:12:42.302 "psk": "key0", 00:12:42.302 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:42.302 "hdgst": false, 00:12:42.302 "ddgst": false, 00:12:42.302 "multipath": "multipath" 00:12:42.302 } 00:12:42.302 }, 00:12:42.302 { 00:12:42.302 "method": "bdev_nvme_set_hotplug", 00:12:42.302 "params": { 00:12:42.302 "period_us": 100000, 00:12:42.302 "enable": false 00:12:42.302 } 00:12:42.302 }, 00:12:42.302 { 00:12:42.302 "method": "bdev_wait_for_examine" 00:12:42.302 } 00:12:42.302 ] 00:12:42.302 }, 00:12:42.302 { 00:12:42.302 "subsystem": "nbd", 00:12:42.302 "config": [] 00:12:42.302 } 00:12:42.302 ] 00:12:42.302 }' 00:12:42.302 [2024-11-15 12:47:50.789512] Starting SPDK v25.01-pre git 
sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:12:42.302 [2024-11-15 12:47:50.789655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71569 ] 00:12:42.302 [2024-11-15 12:47:50.939745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.561 [2024-11-15 12:47:50.979436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.561 [2024-11-15 12:47:51.094684] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:42.561 [2024-11-15 12:47:51.129697] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:43.129 12:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:43.129 12:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:43.129 12:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:43.387 Running I/O for 10 seconds... 00:12:45.261 4950.00 IOPS, 19.34 MiB/s [2024-11-15T12:47:54.867Z] 4962.50 IOPS, 19.38 MiB/s [2024-11-15T12:47:56.246Z] 4968.00 IOPS, 19.41 MiB/s [2024-11-15T12:47:57.183Z] 4968.00 IOPS, 19.41 MiB/s [2024-11-15T12:47:58.120Z] 4962.40 IOPS, 19.38 MiB/s [2024-11-15T12:47:59.057Z] 4956.33 IOPS, 19.36 MiB/s [2024-11-15T12:47:59.993Z] 4957.43 IOPS, 19.36 MiB/s [2024-11-15T12:48:00.931Z] 4956.12 IOPS, 19.36 MiB/s [2024-11-15T12:48:01.867Z] 4934.00 IOPS, 19.27 MiB/s [2024-11-15T12:48:01.867Z] 4916.20 IOPS, 19.20 MiB/s 00:12:53.197 Latency(us) 00:12:53.197 [2024-11-15T12:48:01.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.197 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:53.197 Verification LBA range: start 0x0 length 0x2000 00:12:53.197 TLSTESTn1 : 10.01 4922.39 19.23 0.00 0.00 25960.48 4200.26 21805.61 00:12:53.197 [2024-11-15T12:48:01.867Z] =================================================================================================================== 00:12:53.197 [2024-11-15T12:48:01.867Z] Total : 4922.39 19.23 0.00 0.00 25960.48 4200.26 21805.61 00:12:53.197 { 00:12:53.197 "results": [ 00:12:53.197 { 00:12:53.197 "job": "TLSTESTn1", 00:12:53.197 "core_mask": "0x4", 00:12:53.197 "workload": "verify", 00:12:53.197 "status": "finished", 00:12:53.197 "verify_range": { 00:12:53.197 "start": 0, 00:12:53.197 "length": 8192 00:12:53.197 }, 00:12:53.197 "queue_depth": 128, 00:12:53.197 "io_size": 4096, 00:12:53.197 "runtime": 10.013225, 00:12:53.197 "iops": 4922.390139041118, 00:12:53.197 "mibps": 19.22808648062937, 00:12:53.197 "io_failed": 0, 00:12:53.197 "io_timeout": 0, 00:12:53.197 "avg_latency_us": 25960.48409635932, 00:12:53.197 "min_latency_us": 4200.261818181818, 00:12:53.197 "max_latency_us": 21805.614545454544 00:12:53.197 } 00:12:53.197 ], 00:12:53.197 "core_count": 1 00:12:53.197 } 00:12:53.457 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:53.457 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 71569 00:12:53.457 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71569 ']' 00:12:53.457 
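The 10-second verify workload summarized in the table above ran against the TLSTESTn1 bdev inside this bdevperf instance; its key and TLS-secured controller came from the replayed JSON config, and bdevperf.py only had to trigger the run. Expressed as explicit RPCs (as was done for the first bdevperf instance at tls.sh@193/@194) followed by the @213 trigger, the flow looks roughly like this, with rpc.py and bdevperf.py abbreviating the in-repo script paths:

  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fCevzAXk6f
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The roughly 4.92k IOPS / 19.2 MiB/s reported above is simply what this queue-depth-128, 4096-byte verify load sustained over TLS for 10 seconds against the malloc0-backed namespace.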
12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71569 00:12:53.458 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:53.458 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:53.458 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71569 00:12:53.458 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:53.458 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:53.458 killing process with pid 71569 00:12:53.458 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71569' 00:12:53.458 Received shutdown signal, test time was about 10.000000 seconds 00:12:53.458 00:12:53.458 Latency(us) 00:12:53.458 [2024-11-15T12:48:02.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.458 [2024-11-15T12:48:02.128Z] =================================================================================================================== 00:12:53.458 [2024-11-15T12:48:02.128Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:53.458 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71569 00:12:53.458 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71569 00:12:53.458 12:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 71531 00:12:53.458 12:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71531 ']' 00:12:53.458 12:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71531 00:12:53.458 12:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:53.458 12:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:53.458 12:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71531 00:12:53.458 12:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:53.458 12:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:53.458 killing process with pid 71531 00:12:53.458 12:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71531' 00:12:53.458 12:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71531 00:12:53.458 12:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71531 00:12:53.717 12:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:12:53.717 12:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:53.717 12:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:53.717 12:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:53.717 12:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:53.717 12:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71702 
00:12:53.717 12:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71702 00:12:53.717 12:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71702 ']' 00:12:53.717 12:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.717 12:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:53.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.717 12:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.717 12:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:53.717 12:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:53.717 [2024-11-15 12:48:02.265961] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:12:53.717 [2024-11-15 12:48:02.266097] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.976 [2024-11-15 12:48:02.408746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.976 [2024-11-15 12:48:02.437467] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:53.976 [2024-11-15 12:48:02.437545] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:53.976 [2024-11-15 12:48:02.437571] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:53.976 [2024-11-15 12:48:02.437578] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:53.976 [2024-11-15 12:48:02.437584] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:53.977 [2024-11-15 12:48:02.437904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.977 [2024-11-15 12:48:02.466651] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:54.914 12:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:54.914 12:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:54.914 12:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:54.914 12:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:54.914 12:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:54.914 12:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.914 12:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.fCevzAXk6f 00:12:54.914 12:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.fCevzAXk6f 00:12:54.914 12:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:54.914 [2024-11-15 12:48:03.552288] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:54.914 12:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:55.173 12:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:12:55.432 [2024-11-15 12:48:04.028360] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:55.432 [2024-11-15 12:48:04.028578] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:55.432 12:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:55.691 malloc0 00:12:55.691 12:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:55.950 12:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.fCevzAXk6f 00:12:56.210 12:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:12:56.469 12:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:12:56.469 12:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=71752 00:12:56.469 12:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:56.469 12:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 71752 /var/tmp/bdevperf.sock 00:12:56.469 12:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71752 ']' 00:12:56.469 
12:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:56.469 12:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:56.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:56.469 12:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:56.469 12:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:56.469 12:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:56.469 [2024-11-15 12:48:05.020207] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:12:56.469 [2024-11-15 12:48:05.020305] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71752 ] 00:12:56.729 [2024-11-15 12:48:05.168162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.729 [2024-11-15 12:48:05.206258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.729 [2024-11-15 12:48:05.239051] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:57.297 12:48:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:57.297 12:48:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:57.297 12:48:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fCevzAXk6f 00:12:57.556 12:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:12:57.815 [2024-11-15 12:48:06.351612] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:57.815 nvme0n1 00:12:57.815 12:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:58.074 Running I/O for 1 seconds... 
00:12:59.011 4608.00 IOPS, 18.00 MiB/s 00:12:59.011 Latency(us) 00:12:59.011 [2024-11-15T12:48:07.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:59.011 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:59.011 Verification LBA range: start 0x0 length 0x2000 00:12:59.011 nvme0n1 : 1.02 4642.37 18.13 0.00 0.00 27307.41 8102.63 18945.86 00:12:59.011 [2024-11-15T12:48:07.681Z] =================================================================================================================== 00:12:59.011 [2024-11-15T12:48:07.681Z] Total : 4642.37 18.13 0.00 0.00 27307.41 8102.63 18945.86 00:12:59.011 { 00:12:59.011 "results": [ 00:12:59.011 { 00:12:59.012 "job": "nvme0n1", 00:12:59.012 "core_mask": "0x2", 00:12:59.012 "workload": "verify", 00:12:59.012 "status": "finished", 00:12:59.012 "verify_range": { 00:12:59.012 "start": 0, 00:12:59.012 "length": 8192 00:12:59.012 }, 00:12:59.012 "queue_depth": 128, 00:12:59.012 "io_size": 4096, 00:12:59.012 "runtime": 1.020168, 00:12:59.012 "iops": 4642.3726288219195, 00:12:59.012 "mibps": 18.134268081335623, 00:12:59.012 "io_failed": 0, 00:12:59.012 "io_timeout": 0, 00:12:59.012 "avg_latency_us": 27307.406781326783, 00:12:59.012 "min_latency_us": 8102.632727272728, 00:12:59.012 "max_latency_us": 18945.861818181816 00:12:59.012 } 00:12:59.012 ], 00:12:59.012 "core_count": 1 00:12:59.012 } 00:12:59.012 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 71752 00:12:59.012 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71752 ']' 00:12:59.012 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71752 00:12:59.012 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:59.012 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:59.012 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71752 00:12:59.012 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:59.012 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:59.012 killing process with pid 71752 00:12:59.012 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71752' 00:12:59.012 Received shutdown signal, test time was about 1.000000 seconds 00:12:59.012 00:12:59.012 Latency(us) 00:12:59.012 [2024-11-15T12:48:07.682Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:59.012 [2024-11-15T12:48:07.682Z] =================================================================================================================== 00:12:59.012 [2024-11-15T12:48:07.682Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:59.012 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71752 00:12:59.012 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71752 00:12:59.271 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 71702 00:12:59.271 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71702 ']' 00:12:59.271 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71702 00:12:59.271 12:48:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:59.271 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:59.271 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71702 00:12:59.271 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:59.271 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:59.271 killing process with pid 71702 00:12:59.271 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71702' 00:12:59.271 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71702 00:12:59.271 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71702 00:12:59.271 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:12:59.271 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:59.271 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:59.271 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:59.271 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71803 00:12:59.271 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:59.271 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71803 00:12:59.271 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71803 ']' 00:12:59.271 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.271 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:59.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.271 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.271 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:59.271 12:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:59.531 [2024-11-15 12:48:07.983775] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:12:59.531 [2024-11-15 12:48:07.983870] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.531 [2024-11-15 12:48:08.121202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.531 [2024-11-15 12:48:08.150686] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.531 [2024-11-15 12:48:08.150749] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:59.531 [2024-11-15 12:48:08.150759] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.531 [2024-11-15 12:48:08.150766] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.531 [2024-11-15 12:48:08.150772] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:59.531 [2024-11-15 12:48:08.151063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.531 [2024-11-15 12:48:08.177363] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:59.790 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.790 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:59.790 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:59.790 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:59.790 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:59.790 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.790 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:12:59.790 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.790 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:59.790 [2024-11-15 12:48:08.269712] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:59.790 malloc0 00:12:59.790 [2024-11-15 12:48:08.295291] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:59.790 [2024-11-15 12:48:08.295460] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:59.790 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.790 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=71828 00:12:59.790 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:12:59.790 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 71828 /var/tmp/bdevperf.sock 00:12:59.790 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71828 ']' 00:12:59.790 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:59.790 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:59.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:59.790 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:12:59.790 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:59.790 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:59.790 [2024-11-15 12:48:08.369247] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:12:59.790 [2024-11-15 12:48:08.369325] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71828 ] 00:13:00.049 [2024-11-15 12:48:08.509859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.049 [2024-11-15 12:48:08.537837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.049 [2024-11-15 12:48:08.564622] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:00.049 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:00.049 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:00.049 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fCevzAXk6f 00:13:00.309 12:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:00.567 [2024-11-15 12:48:09.057403] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:00.567 nvme0n1 00:13:00.567 12:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:00.826 Running I/O for 1 seconds... 
00:13:01.763 4749.00 IOPS, 18.55 MiB/s 00:13:01.763 Latency(us) 00:13:01.763 [2024-11-15T12:48:10.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:01.763 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:01.763 Verification LBA range: start 0x0 length 0x2000 00:13:01.763 nvme0n1 : 1.02 4805.74 18.77 0.00 0.00 26367.85 1139.43 17158.52 00:13:01.763 [2024-11-15T12:48:10.433Z] =================================================================================================================== 00:13:01.763 [2024-11-15T12:48:10.433Z] Total : 4805.74 18.77 0.00 0.00 26367.85 1139.43 17158.52 00:13:01.763 { 00:13:01.763 "results": [ 00:13:01.763 { 00:13:01.763 "job": "nvme0n1", 00:13:01.763 "core_mask": "0x2", 00:13:01.763 "workload": "verify", 00:13:01.763 "status": "finished", 00:13:01.763 "verify_range": { 00:13:01.763 "start": 0, 00:13:01.763 "length": 8192 00:13:01.763 }, 00:13:01.763 "queue_depth": 128, 00:13:01.763 "io_size": 4096, 00:13:01.763 "runtime": 1.015036, 00:13:01.763 "iops": 4805.740880126419, 00:13:01.763 "mibps": 18.772425312993825, 00:13:01.763 "io_failed": 0, 00:13:01.763 "io_timeout": 0, 00:13:01.763 "avg_latency_us": 26367.85381788363, 00:13:01.763 "min_latency_us": 1139.4327272727273, 00:13:01.763 "max_latency_us": 17158.516363636365 00:13:01.763 } 00:13:01.763 ], 00:13:01.763 "core_count": 1 00:13:01.763 } 00:13:01.763 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:13:01.763 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.763 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:02.023 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.023 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:13:02.023 "subsystems": [ 00:13:02.023 { 00:13:02.023 "subsystem": "keyring", 00:13:02.023 "config": [ 00:13:02.023 { 00:13:02.023 "method": "keyring_file_add_key", 00:13:02.023 "params": { 00:13:02.023 "name": "key0", 00:13:02.023 "path": "/tmp/tmp.fCevzAXk6f" 00:13:02.023 } 00:13:02.023 } 00:13:02.023 ] 00:13:02.023 }, 00:13:02.023 { 00:13:02.023 "subsystem": "iobuf", 00:13:02.023 "config": [ 00:13:02.023 { 00:13:02.023 "method": "iobuf_set_options", 00:13:02.023 "params": { 00:13:02.023 "small_pool_count": 8192, 00:13:02.023 "large_pool_count": 1024, 00:13:02.023 "small_bufsize": 8192, 00:13:02.023 "large_bufsize": 135168, 00:13:02.023 "enable_numa": false 00:13:02.023 } 00:13:02.023 } 00:13:02.023 ] 00:13:02.023 }, 00:13:02.023 { 00:13:02.023 "subsystem": "sock", 00:13:02.023 "config": [ 00:13:02.023 { 00:13:02.023 "method": "sock_set_default_impl", 00:13:02.023 "params": { 00:13:02.023 "impl_name": "uring" 00:13:02.023 } 00:13:02.023 }, 00:13:02.023 { 00:13:02.023 "method": "sock_impl_set_options", 00:13:02.023 "params": { 00:13:02.023 "impl_name": "ssl", 00:13:02.023 "recv_buf_size": 4096, 00:13:02.023 "send_buf_size": 4096, 00:13:02.023 "enable_recv_pipe": true, 00:13:02.023 "enable_quickack": false, 00:13:02.023 "enable_placement_id": 0, 00:13:02.023 "enable_zerocopy_send_server": true, 00:13:02.023 "enable_zerocopy_send_client": false, 00:13:02.023 "zerocopy_threshold": 0, 00:13:02.023 "tls_version": 0, 00:13:02.023 "enable_ktls": false 00:13:02.023 } 00:13:02.023 }, 00:13:02.023 { 00:13:02.023 "method": "sock_impl_set_options", 00:13:02.023 "params": { 00:13:02.023 "impl_name": 
"posix", 00:13:02.023 "recv_buf_size": 2097152, 00:13:02.023 "send_buf_size": 2097152, 00:13:02.023 "enable_recv_pipe": true, 00:13:02.023 "enable_quickack": false, 00:13:02.023 "enable_placement_id": 0, 00:13:02.023 "enable_zerocopy_send_server": true, 00:13:02.023 "enable_zerocopy_send_client": false, 00:13:02.023 "zerocopy_threshold": 0, 00:13:02.023 "tls_version": 0, 00:13:02.023 "enable_ktls": false 00:13:02.023 } 00:13:02.023 }, 00:13:02.023 { 00:13:02.023 "method": "sock_impl_set_options", 00:13:02.023 "params": { 00:13:02.023 "impl_name": "uring", 00:13:02.023 "recv_buf_size": 2097152, 00:13:02.023 "send_buf_size": 2097152, 00:13:02.023 "enable_recv_pipe": true, 00:13:02.023 "enable_quickack": false, 00:13:02.023 "enable_placement_id": 0, 00:13:02.023 "enable_zerocopy_send_server": false, 00:13:02.023 "enable_zerocopy_send_client": false, 00:13:02.023 "zerocopy_threshold": 0, 00:13:02.023 "tls_version": 0, 00:13:02.023 "enable_ktls": false 00:13:02.023 } 00:13:02.023 } 00:13:02.023 ] 00:13:02.023 }, 00:13:02.023 { 00:13:02.023 "subsystem": "vmd", 00:13:02.023 "config": [] 00:13:02.023 }, 00:13:02.023 { 00:13:02.023 "subsystem": "accel", 00:13:02.023 "config": [ 00:13:02.023 { 00:13:02.023 "method": "accel_set_options", 00:13:02.023 "params": { 00:13:02.023 "small_cache_size": 128, 00:13:02.023 "large_cache_size": 16, 00:13:02.023 "task_count": 2048, 00:13:02.023 "sequence_count": 2048, 00:13:02.023 "buf_count": 2048 00:13:02.023 } 00:13:02.023 } 00:13:02.023 ] 00:13:02.023 }, 00:13:02.023 { 00:13:02.023 "subsystem": "bdev", 00:13:02.023 "config": [ 00:13:02.023 { 00:13:02.023 "method": "bdev_set_options", 00:13:02.023 "params": { 00:13:02.023 "bdev_io_pool_size": 65535, 00:13:02.023 "bdev_io_cache_size": 256, 00:13:02.023 "bdev_auto_examine": true, 00:13:02.023 "iobuf_small_cache_size": 128, 00:13:02.023 "iobuf_large_cache_size": 16 00:13:02.023 } 00:13:02.023 }, 00:13:02.023 { 00:13:02.023 "method": "bdev_raid_set_options", 00:13:02.023 "params": { 00:13:02.023 "process_window_size_kb": 1024, 00:13:02.023 "process_max_bandwidth_mb_sec": 0 00:13:02.023 } 00:13:02.023 }, 00:13:02.023 { 00:13:02.023 "method": "bdev_iscsi_set_options", 00:13:02.023 "params": { 00:13:02.023 "timeout_sec": 30 00:13:02.023 } 00:13:02.023 }, 00:13:02.023 { 00:13:02.023 "method": "bdev_nvme_set_options", 00:13:02.023 "params": { 00:13:02.023 "action_on_timeout": "none", 00:13:02.023 "timeout_us": 0, 00:13:02.023 "timeout_admin_us": 0, 00:13:02.023 "keep_alive_timeout_ms": 10000, 00:13:02.023 "arbitration_burst": 0, 00:13:02.023 "low_priority_weight": 0, 00:13:02.023 "medium_priority_weight": 0, 00:13:02.023 "high_priority_weight": 0, 00:13:02.023 "nvme_adminq_poll_period_us": 10000, 00:13:02.023 "nvme_ioq_poll_period_us": 0, 00:13:02.023 "io_queue_requests": 0, 00:13:02.023 "delay_cmd_submit": true, 00:13:02.023 "transport_retry_count": 4, 00:13:02.023 "bdev_retry_count": 3, 00:13:02.023 "transport_ack_timeout": 0, 00:13:02.023 "ctrlr_loss_timeout_sec": 0, 00:13:02.023 "reconnect_delay_sec": 0, 00:13:02.023 "fast_io_fail_timeout_sec": 0, 00:13:02.023 "disable_auto_failback": false, 00:13:02.023 "generate_uuids": false, 00:13:02.023 "transport_tos": 0, 00:13:02.023 "nvme_error_stat": false, 00:13:02.023 "rdma_srq_size": 0, 00:13:02.023 "io_path_stat": false, 00:13:02.023 "allow_accel_sequence": false, 00:13:02.023 "rdma_max_cq_size": 0, 00:13:02.023 "rdma_cm_event_timeout_ms": 0, 00:13:02.023 "dhchap_digests": [ 00:13:02.023 "sha256", 00:13:02.023 "sha384", 00:13:02.023 "sha512" 00:13:02.023 ], 00:13:02.023 
"dhchap_dhgroups": [ 00:13:02.023 "null", 00:13:02.023 "ffdhe2048", 00:13:02.023 "ffdhe3072", 00:13:02.023 "ffdhe4096", 00:13:02.023 "ffdhe6144", 00:13:02.023 "ffdhe8192" 00:13:02.023 ] 00:13:02.023 } 00:13:02.023 }, 00:13:02.023 { 00:13:02.023 "method": "bdev_nvme_set_hotplug", 00:13:02.023 "params": { 00:13:02.023 "period_us": 100000, 00:13:02.023 "enable": false 00:13:02.023 } 00:13:02.023 }, 00:13:02.023 { 00:13:02.023 "method": "bdev_malloc_create", 00:13:02.023 "params": { 00:13:02.023 "name": "malloc0", 00:13:02.023 "num_blocks": 8192, 00:13:02.023 "block_size": 4096, 00:13:02.023 "physical_block_size": 4096, 00:13:02.023 "uuid": "2fe8481e-4f13-4fc1-922d-31aac49e506b", 00:13:02.023 "optimal_io_boundary": 0, 00:13:02.023 "md_size": 0, 00:13:02.023 "dif_type": 0, 00:13:02.023 "dif_is_head_of_md": false, 00:13:02.023 "dif_pi_format": 0 00:13:02.023 } 00:13:02.023 }, 00:13:02.023 { 00:13:02.023 "method": "bdev_wait_for_examine" 00:13:02.023 } 00:13:02.023 ] 00:13:02.023 }, 00:13:02.023 { 00:13:02.023 "subsystem": "nbd", 00:13:02.023 "config": [] 00:13:02.023 }, 00:13:02.023 { 00:13:02.023 "subsystem": "scheduler", 00:13:02.023 "config": [ 00:13:02.023 { 00:13:02.023 "method": "framework_set_scheduler", 00:13:02.023 "params": { 00:13:02.023 "name": "static" 00:13:02.023 } 00:13:02.023 } 00:13:02.023 ] 00:13:02.023 }, 00:13:02.023 { 00:13:02.023 "subsystem": "nvmf", 00:13:02.023 "config": [ 00:13:02.023 { 00:13:02.023 "method": "nvmf_set_config", 00:13:02.023 "params": { 00:13:02.023 "discovery_filter": "match_any", 00:13:02.023 "admin_cmd_passthru": { 00:13:02.023 "identify_ctrlr": false 00:13:02.023 }, 00:13:02.023 "dhchap_digests": [ 00:13:02.023 "sha256", 00:13:02.023 "sha384", 00:13:02.023 "sha512" 00:13:02.023 ], 00:13:02.023 "dhchap_dhgroups": [ 00:13:02.023 "null", 00:13:02.023 "ffdhe2048", 00:13:02.023 "ffdhe3072", 00:13:02.023 "ffdhe4096", 00:13:02.023 "ffdhe6144", 00:13:02.023 "ffdhe8192" 00:13:02.023 ] 00:13:02.023 } 00:13:02.023 }, 00:13:02.023 { 00:13:02.023 "method": "nvmf_set_max_subsystems", 00:13:02.023 "params": { 00:13:02.023 "max_subsystems": 1024 00:13:02.023 } 00:13:02.023 }, 00:13:02.023 { 00:13:02.023 "method": "nvmf_set_crdt", 00:13:02.023 "params": { 00:13:02.024 "crdt1": 0, 00:13:02.024 "crdt2": 0, 00:13:02.024 "crdt3": 0 00:13:02.024 } 00:13:02.024 }, 00:13:02.024 { 00:13:02.024 "method": "nvmf_create_transport", 00:13:02.024 "params": { 00:13:02.024 "trtype": "TCP", 00:13:02.024 "max_queue_depth": 128, 00:13:02.024 "max_io_qpairs_per_ctrlr": 127, 00:13:02.024 "in_capsule_data_size": 4096, 00:13:02.024 "max_io_size": 131072, 00:13:02.024 "io_unit_size": 131072, 00:13:02.024 "max_aq_depth": 128, 00:13:02.024 "num_shared_buffers": 511, 00:13:02.024 "buf_cache_size": 4294967295, 00:13:02.024 "dif_insert_or_strip": false, 00:13:02.024 "zcopy": false, 00:13:02.024 "c2h_success": false, 00:13:02.024 "sock_priority": 0, 00:13:02.024 "abort_timeout_sec": 1, 00:13:02.024 "ack_timeout": 0, 00:13:02.024 "data_wr_pool_size": 0 00:13:02.024 } 00:13:02.024 }, 00:13:02.024 { 00:13:02.024 "method": "nvmf_create_subsystem", 00:13:02.024 "params": { 00:13:02.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:02.024 "allow_any_host": false, 00:13:02.024 "serial_number": "00000000000000000000", 00:13:02.024 "model_number": "SPDK bdev Controller", 00:13:02.024 "max_namespaces": 32, 00:13:02.024 "min_cntlid": 1, 00:13:02.024 "max_cntlid": 65519, 00:13:02.024 "ana_reporting": false 00:13:02.024 } 00:13:02.024 }, 00:13:02.024 { 00:13:02.024 "method": "nvmf_subsystem_add_host", 
00:13:02.024 "params": { 00:13:02.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:02.024 "host": "nqn.2016-06.io.spdk:host1", 00:13:02.024 "psk": "key0" 00:13:02.024 } 00:13:02.024 }, 00:13:02.024 { 00:13:02.024 "method": "nvmf_subsystem_add_ns", 00:13:02.024 "params": { 00:13:02.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:02.024 "namespace": { 00:13:02.024 "nsid": 1, 00:13:02.024 "bdev_name": "malloc0", 00:13:02.024 "nguid": "2FE8481E4F134FC1922D31AAC49E506B", 00:13:02.024 "uuid": "2fe8481e-4f13-4fc1-922d-31aac49e506b", 00:13:02.024 "no_auto_visible": false 00:13:02.024 } 00:13:02.024 } 00:13:02.024 }, 00:13:02.024 { 00:13:02.024 "method": "nvmf_subsystem_add_listener", 00:13:02.024 "params": { 00:13:02.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:02.024 "listen_address": { 00:13:02.024 "trtype": "TCP", 00:13:02.024 "adrfam": "IPv4", 00:13:02.024 "traddr": "10.0.0.3", 00:13:02.024 "trsvcid": "4420" 00:13:02.024 }, 00:13:02.024 "secure_channel": false, 00:13:02.024 "sock_impl": "ssl" 00:13:02.024 } 00:13:02.024 } 00:13:02.024 ] 00:13:02.024 } 00:13:02.024 ] 00:13:02.024 }' 00:13:02.024 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:02.284 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:13:02.284 "subsystems": [ 00:13:02.284 { 00:13:02.284 "subsystem": "keyring", 00:13:02.284 "config": [ 00:13:02.284 { 00:13:02.284 "method": "keyring_file_add_key", 00:13:02.284 "params": { 00:13:02.284 "name": "key0", 00:13:02.284 "path": "/tmp/tmp.fCevzAXk6f" 00:13:02.284 } 00:13:02.284 } 00:13:02.284 ] 00:13:02.284 }, 00:13:02.284 { 00:13:02.284 "subsystem": "iobuf", 00:13:02.284 "config": [ 00:13:02.284 { 00:13:02.284 "method": "iobuf_set_options", 00:13:02.284 "params": { 00:13:02.284 "small_pool_count": 8192, 00:13:02.284 "large_pool_count": 1024, 00:13:02.284 "small_bufsize": 8192, 00:13:02.284 "large_bufsize": 135168, 00:13:02.284 "enable_numa": false 00:13:02.284 } 00:13:02.284 } 00:13:02.284 ] 00:13:02.284 }, 00:13:02.284 { 00:13:02.284 "subsystem": "sock", 00:13:02.284 "config": [ 00:13:02.284 { 00:13:02.284 "method": "sock_set_default_impl", 00:13:02.284 "params": { 00:13:02.284 "impl_name": "uring" 00:13:02.284 } 00:13:02.284 }, 00:13:02.284 { 00:13:02.284 "method": "sock_impl_set_options", 00:13:02.284 "params": { 00:13:02.284 "impl_name": "ssl", 00:13:02.284 "recv_buf_size": 4096, 00:13:02.284 "send_buf_size": 4096, 00:13:02.284 "enable_recv_pipe": true, 00:13:02.284 "enable_quickack": false, 00:13:02.284 "enable_placement_id": 0, 00:13:02.284 "enable_zerocopy_send_server": true, 00:13:02.284 "enable_zerocopy_send_client": false, 00:13:02.284 "zerocopy_threshold": 0, 00:13:02.284 "tls_version": 0, 00:13:02.284 "enable_ktls": false 00:13:02.284 } 00:13:02.284 }, 00:13:02.284 { 00:13:02.284 "method": "sock_impl_set_options", 00:13:02.284 "params": { 00:13:02.284 "impl_name": "posix", 00:13:02.284 "recv_buf_size": 2097152, 00:13:02.284 "send_buf_size": 2097152, 00:13:02.284 "enable_recv_pipe": true, 00:13:02.284 "enable_quickack": false, 00:13:02.284 "enable_placement_id": 0, 00:13:02.284 "enable_zerocopy_send_server": true, 00:13:02.284 "enable_zerocopy_send_client": false, 00:13:02.284 "zerocopy_threshold": 0, 00:13:02.284 "tls_version": 0, 00:13:02.284 "enable_ktls": false 00:13:02.284 } 00:13:02.284 }, 00:13:02.284 { 00:13:02.284 "method": "sock_impl_set_options", 00:13:02.284 "params": { 00:13:02.284 "impl_name": "uring", 00:13:02.284 
"recv_buf_size": 2097152, 00:13:02.284 "send_buf_size": 2097152, 00:13:02.284 "enable_recv_pipe": true, 00:13:02.284 "enable_quickack": false, 00:13:02.284 "enable_placement_id": 0, 00:13:02.284 "enable_zerocopy_send_server": false, 00:13:02.284 "enable_zerocopy_send_client": false, 00:13:02.284 "zerocopy_threshold": 0, 00:13:02.284 "tls_version": 0, 00:13:02.284 "enable_ktls": false 00:13:02.284 } 00:13:02.284 } 00:13:02.284 ] 00:13:02.284 }, 00:13:02.284 { 00:13:02.284 "subsystem": "vmd", 00:13:02.284 "config": [] 00:13:02.284 }, 00:13:02.284 { 00:13:02.284 "subsystem": "accel", 00:13:02.284 "config": [ 00:13:02.284 { 00:13:02.284 "method": "accel_set_options", 00:13:02.284 "params": { 00:13:02.284 "small_cache_size": 128, 00:13:02.284 "large_cache_size": 16, 00:13:02.284 "task_count": 2048, 00:13:02.284 "sequence_count": 2048, 00:13:02.284 "buf_count": 2048 00:13:02.284 } 00:13:02.284 } 00:13:02.284 ] 00:13:02.284 }, 00:13:02.284 { 00:13:02.284 "subsystem": "bdev", 00:13:02.284 "config": [ 00:13:02.284 { 00:13:02.284 "method": "bdev_set_options", 00:13:02.284 "params": { 00:13:02.284 "bdev_io_pool_size": 65535, 00:13:02.284 "bdev_io_cache_size": 256, 00:13:02.284 "bdev_auto_examine": true, 00:13:02.284 "iobuf_small_cache_size": 128, 00:13:02.284 "iobuf_large_cache_size": 16 00:13:02.284 } 00:13:02.284 }, 00:13:02.284 { 00:13:02.284 "method": "bdev_raid_set_options", 00:13:02.284 "params": { 00:13:02.284 "process_window_size_kb": 1024, 00:13:02.284 "process_max_bandwidth_mb_sec": 0 00:13:02.284 } 00:13:02.284 }, 00:13:02.284 { 00:13:02.284 "method": "bdev_iscsi_set_options", 00:13:02.284 "params": { 00:13:02.284 "timeout_sec": 30 00:13:02.284 } 00:13:02.284 }, 00:13:02.284 { 00:13:02.284 "method": "bdev_nvme_set_options", 00:13:02.284 "params": { 00:13:02.284 "action_on_timeout": "none", 00:13:02.284 "timeout_us": 0, 00:13:02.284 "timeout_admin_us": 0, 00:13:02.284 "keep_alive_timeout_ms": 10000, 00:13:02.284 "arbitration_burst": 0, 00:13:02.284 "low_priority_weight": 0, 00:13:02.284 "medium_priority_weight": 0, 00:13:02.284 "high_priority_weight": 0, 00:13:02.284 "nvme_adminq_poll_period_us": 10000, 00:13:02.285 "nvme_ioq_poll_period_us": 0, 00:13:02.285 "io_queue_requests": 512, 00:13:02.285 "delay_cmd_submit": true, 00:13:02.285 "transport_retry_count": 4, 00:13:02.285 "bdev_retry_count": 3, 00:13:02.285 "transport_ack_timeout": 0, 00:13:02.285 "ctrlr_loss_timeout_sec": 0, 00:13:02.285 "reconnect_delay_sec": 0, 00:13:02.285 "fast_io_fail_timeout_sec": 0, 00:13:02.285 "disable_auto_failback": false, 00:13:02.285 "generate_uuids": false, 00:13:02.285 "transport_tos": 0, 00:13:02.285 "nvme_error_stat": false, 00:13:02.285 "rdma_srq_size": 0, 00:13:02.285 "io_path_stat": false, 00:13:02.285 "allow_accel_sequence": false, 00:13:02.285 "rdma_max_cq_size": 0, 00:13:02.285 "rdma_cm_event_timeout_ms": 0, 00:13:02.285 "dhchap_digests": [ 00:13:02.285 "sha256", 00:13:02.285 "sha384", 00:13:02.285 "sha512" 00:13:02.285 ], 00:13:02.285 "dhchap_dhgroups": [ 00:13:02.285 "null", 00:13:02.285 "ffdhe2048", 00:13:02.285 "ffdhe3072", 00:13:02.285 "ffdhe4096", 00:13:02.285 "ffdhe6144", 00:13:02.285 "ffdhe8192" 00:13:02.285 ] 00:13:02.285 } 00:13:02.285 }, 00:13:02.285 { 00:13:02.285 "method": "bdev_nvme_attach_controller", 00:13:02.285 "params": { 00:13:02.285 "name": "nvme0", 00:13:02.285 "trtype": "TCP", 00:13:02.285 "adrfam": "IPv4", 00:13:02.285 "traddr": "10.0.0.3", 00:13:02.285 "trsvcid": "4420", 00:13:02.285 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:02.285 "prchk_reftag": false, 00:13:02.285 
"prchk_guard": false, 00:13:02.285 "ctrlr_loss_timeout_sec": 0, 00:13:02.285 "reconnect_delay_sec": 0, 00:13:02.285 "fast_io_fail_timeout_sec": 0, 00:13:02.285 "psk": "key0", 00:13:02.285 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:02.285 "hdgst": false, 00:13:02.285 "ddgst": false, 00:13:02.285 "multipath": "multipath" 00:13:02.285 } 00:13:02.285 }, 00:13:02.285 { 00:13:02.285 "method": "bdev_nvme_set_hotplug", 00:13:02.285 "params": { 00:13:02.285 "period_us": 100000, 00:13:02.285 "enable": false 00:13:02.285 } 00:13:02.285 }, 00:13:02.285 { 00:13:02.285 "method": "bdev_enable_histogram", 00:13:02.285 "params": { 00:13:02.285 "name": "nvme0n1", 00:13:02.285 "enable": true 00:13:02.285 } 00:13:02.285 }, 00:13:02.285 { 00:13:02.285 "method": "bdev_wait_for_examine" 00:13:02.285 } 00:13:02.285 ] 00:13:02.285 }, 00:13:02.285 { 00:13:02.285 "subsystem": "nbd", 00:13:02.285 "config": [] 00:13:02.285 } 00:13:02.285 ] 00:13:02.285 }' 00:13:02.285 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 71828 00:13:02.285 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71828 ']' 00:13:02.285 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71828 00:13:02.285 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:02.285 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:02.285 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71828 00:13:02.285 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:02.285 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:02.285 killing process with pid 71828 00:13:02.285 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71828' 00:13:02.285 Received shutdown signal, test time was about 1.000000 seconds 00:13:02.285 00:13:02.285 Latency(us) 00:13:02.285 [2024-11-15T12:48:10.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.285 [2024-11-15T12:48:10.955Z] =================================================================================================================== 00:13:02.285 [2024-11-15T12:48:10.955Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:02.285 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71828 00:13:02.285 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71828 00:13:02.285 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 71803 00:13:02.285 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71803 ']' 00:13:02.285 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71803 00:13:02.285 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:02.285 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:02.285 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71803 00:13:02.545 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:02.545 12:48:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:02.545 killing process with pid 71803 00:13:02.545 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71803' 00:13:02.545 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71803 00:13:02.545 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71803 00:13:02.545 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:13:02.545 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:02.545 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:02.545 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:13:02.545 "subsystems": [ 00:13:02.545 { 00:13:02.545 "subsystem": "keyring", 00:13:02.545 "config": [ 00:13:02.545 { 00:13:02.545 "method": "keyring_file_add_key", 00:13:02.545 "params": { 00:13:02.545 "name": "key0", 00:13:02.545 "path": "/tmp/tmp.fCevzAXk6f" 00:13:02.545 } 00:13:02.545 } 00:13:02.545 ] 00:13:02.545 }, 00:13:02.545 { 00:13:02.545 "subsystem": "iobuf", 00:13:02.545 "config": [ 00:13:02.545 { 00:13:02.545 "method": "iobuf_set_options", 00:13:02.545 "params": { 00:13:02.545 "small_pool_count": 8192, 00:13:02.545 "large_pool_count": 1024, 00:13:02.545 "small_bufsize": 8192, 00:13:02.545 "large_bufsize": 135168, 00:13:02.545 "enable_numa": false 00:13:02.545 } 00:13:02.545 } 00:13:02.545 ] 00:13:02.545 }, 00:13:02.545 { 00:13:02.545 "subsystem": "sock", 00:13:02.545 "config": [ 00:13:02.545 { 00:13:02.545 "method": "sock_set_default_impl", 00:13:02.545 "params": { 00:13:02.545 "impl_name": "uring" 00:13:02.545 } 00:13:02.545 }, 00:13:02.545 { 00:13:02.545 "method": "sock_impl_set_options", 00:13:02.545 "params": { 00:13:02.545 "impl_name": "ssl", 00:13:02.545 "recv_buf_size": 4096, 00:13:02.545 "send_buf_size": 4096, 00:13:02.545 "enable_recv_pipe": true, 00:13:02.545 "enable_quickack": false, 00:13:02.545 "enable_placement_id": 0, 00:13:02.545 "enable_zerocopy_send_server": true, 00:13:02.545 "enable_zerocopy_send_client": false, 00:13:02.545 "zerocopy_threshold": 0, 00:13:02.545 "tls_version": 0, 00:13:02.545 "enable_ktls": false 00:13:02.545 } 00:13:02.545 }, 00:13:02.545 { 00:13:02.545 "method": "sock_impl_set_options", 00:13:02.545 "params": { 00:13:02.545 "impl_name": "posix", 00:13:02.545 "recv_buf_size": 2097152, 00:13:02.545 "send_buf_size": 2097152, 00:13:02.545 "enable_recv_pipe": true, 00:13:02.545 "enable_quickack": false, 00:13:02.545 "enable_placement_id": 0, 00:13:02.545 "enable_zerocopy_send_server": true, 00:13:02.545 "enable_zerocopy_send_client": false, 00:13:02.545 "zerocopy_threshold": 0, 00:13:02.545 "tls_version": 0, 00:13:02.545 "enable_ktls": false 00:13:02.545 } 00:13:02.545 }, 00:13:02.545 { 00:13:02.545 "method": "sock_impl_set_options", 00:13:02.545 "params": { 00:13:02.545 "impl_name": "uring", 00:13:02.545 "recv_buf_size": 2097152, 00:13:02.545 "send_buf_size": 2097152, 00:13:02.545 "enable_recv_pipe": true, 00:13:02.545 "enable_quickack": false, 00:13:02.545 "enable_placement_id": 0, 00:13:02.545 "enable_zerocopy_send_server": false, 00:13:02.545 "enable_zerocopy_send_client": false, 00:13:02.545 "zerocopy_threshold": 0, 00:13:02.545 "tls_version": 0, 00:13:02.545 "enable_ktls": false 00:13:02.545 } 00:13:02.545 } 00:13:02.545 ] 
00:13:02.545 }, 00:13:02.545 { 00:13:02.545 "subsystem": "vmd", 00:13:02.545 "config": [] 00:13:02.545 }, 00:13:02.545 { 00:13:02.545 "subsystem": "accel", 00:13:02.545 "config": [ 00:13:02.545 { 00:13:02.545 "method": "accel_set_options", 00:13:02.545 "params": { 00:13:02.545 "small_cache_size": 128, 00:13:02.545 "large_cache_size": 16, 00:13:02.545 "task_count": 2048, 00:13:02.545 "sequence_count": 2048, 00:13:02.545 "buf_count": 2048 00:13:02.545 } 00:13:02.545 } 00:13:02.545 ] 00:13:02.545 }, 00:13:02.545 { 00:13:02.545 "subsystem": "bdev", 00:13:02.545 "config": [ 00:13:02.545 { 00:13:02.545 "method": "bdev_set_options", 00:13:02.545 "params": { 00:13:02.546 "bdev_io_pool_size": 65535, 00:13:02.546 "bdev_io_cache_size": 256, 00:13:02.546 "bdev_auto_examine": true, 00:13:02.546 "iobuf_small_cache_size": 128, 00:13:02.546 "iobuf_large_cache_size": 16 00:13:02.546 } 00:13:02.546 }, 00:13:02.546 { 00:13:02.546 "method": "bdev_raid_set_options", 00:13:02.546 "params": { 00:13:02.546 "process_window_size_kb": 1024, 00:13:02.546 "process_max_bandwidth_mb_sec": 0 00:13:02.546 } 00:13:02.546 }, 00:13:02.546 { 00:13:02.546 "method": "bdev_iscsi_set_options", 00:13:02.546 "params": { 00:13:02.546 "timeout_sec": 30 00:13:02.546 } 00:13:02.546 }, 00:13:02.546 { 00:13:02.546 "method": "bdev_nvme_set_options", 00:13:02.546 "params": { 00:13:02.546 "action_on_timeout": "none", 00:13:02.546 "timeout_us": 0, 00:13:02.546 "timeout_admin_us": 0, 00:13:02.546 "keep_alive_timeout_ms": 10000, 00:13:02.546 "arbitration_burst": 0, 00:13:02.546 "low_priority_weight": 0, 00:13:02.546 "medium_priority_weight": 0, 00:13:02.546 "high_priority_weight": 0, 00:13:02.546 "nvme_adminq_poll_period_us": 10000, 00:13:02.546 "nvme_ioq_poll_period_us": 0, 00:13:02.546 "io_queue_requests": 0, 00:13:02.546 "delay_cmd_submit": true, 00:13:02.546 "transport_retry_count": 4, 00:13:02.546 "bdev_retry_count": 3, 00:13:02.546 "transport_ack_timeout": 0, 00:13:02.546 "ctrlr_loss_timeout_sec": 0, 00:13:02.546 "reconnect_delay_sec": 0, 00:13:02.546 "fast_io_fail_timeout_sec": 0, 00:13:02.546 "disable_auto_failback": false, 00:13:02.546 "generate_uuids": false, 00:13:02.546 "transport_tos": 0, 00:13:02.546 "nvme_error_stat": false, 00:13:02.546 "rdma_srq_size": 0, 00:13:02.546 "io_path_stat": false, 00:13:02.546 "allow_accel_sequence": false, 00:13:02.546 "rdma_max_cq_size": 0, 00:13:02.546 "rdma_cm_event_timeout_ms": 0, 00:13:02.546 "dhchap_digests": [ 00:13:02.546 "sha256", 00:13:02.546 "sha384", 00:13:02.546 "sha512" 00:13:02.546 ], 00:13:02.546 "dhchap_dhgroups": [ 00:13:02.546 "null", 00:13:02.546 "ffdhe2048", 00:13:02.546 "ffdhe3072", 00:13:02.546 "ffdhe4096", 00:13:02.546 "ffdhe6144", 00:13:02.546 "ffdhe8192" 00:13:02.546 ] 00:13:02.546 } 00:13:02.546 }, 00:13:02.546 { 00:13:02.546 "method": "bdev_nvme_set_hotplug", 00:13:02.546 "params": { 00:13:02.546 "period_us": 100000, 00:13:02.546 "enable": false 00:13:02.546 } 00:13:02.546 }, 00:13:02.546 { 00:13:02.546 "method": "bdev_malloc_create", 00:13:02.546 "params": { 00:13:02.546 "name": "malloc0", 00:13:02.546 "num_blocks": 8192, 00:13:02.546 "block_size": 4096, 00:13:02.546 "physical_block_size": 4096, 00:13:02.546 "uuid": "2fe8481e-4f13-4fc1-922d-31aac49e506b", 00:13:02.546 "optimal_io_boundary": 0, 00:13:02.546 "md_size": 0, 00:13:02.546 "dif_type": 0, 00:13:02.546 "dif_is_head_of_md": false, 00:13:02.546 "dif_pi_format": 0 00:13:02.546 } 00:13:02.546 }, 00:13:02.546 { 00:13:02.546 "method": "bdev_wait_for_examine" 00:13:02.546 } 00:13:02.546 ] 00:13:02.546 }, 00:13:02.546 { 
00:13:02.546 "subsystem": "nbd", 00:13:02.546 "config": [] 00:13:02.546 }, 00:13:02.546 { 00:13:02.546 "subsystem": "scheduler", 00:13:02.546 "config": [ 00:13:02.546 { 00:13:02.546 "method": "framework_set_scheduler", 00:13:02.546 "params": { 00:13:02.546 "name": "static" 00:13:02.546 } 00:13:02.546 } 00:13:02.546 ] 00:13:02.546 }, 00:13:02.546 { 00:13:02.546 "subsystem": "nvmf", 00:13:02.546 "config": [ 00:13:02.546 { 00:13:02.546 "method": "nvmf_set_config", 00:13:02.546 "params": { 00:13:02.546 "discovery_filter": "match_any", 00:13:02.546 "admin_cmd_passthru": { 00:13:02.546 "identify_ctrlr": false 00:13:02.546 }, 00:13:02.546 "dhchap_digests": [ 00:13:02.546 "sha256", 00:13:02.546 "sha384", 00:13:02.546 "sha512" 00:13:02.546 ], 00:13:02.546 "dhchap_dhgroups": [ 00:13:02.546 "null", 00:13:02.546 "ffdhe2048", 00:13:02.546 "ffdhe3072", 00:13:02.546 "ffdhe4096", 00:13:02.546 "ffdhe6144", 00:13:02.546 "ffdhe8192" 00:13:02.546 ] 00:13:02.546 } 00:13:02.546 }, 00:13:02.546 { 00:13:02.546 "method": "nvmf_set_max_subsystems", 00:13:02.546 "params": { 00:13:02.546 "max_subsystems": 1024 00:13:02.546 } 00:13:02.546 }, 00:13:02.546 { 00:13:02.546 "method": "nvmf_set_crdt", 00:13:02.546 "params": { 00:13:02.546 "crdt1": 0, 00:13:02.546 "crdt2": 0, 00:13:02.546 "crdt3": 0 00:13:02.546 } 00:13:02.546 }, 00:13:02.546 { 00:13:02.546 "method": "nvmf_create_transport", 00:13:02.546 "params": { 00:13:02.546 "trtype": "TCP", 00:13:02.546 "max_queue_depth": 128, 00:13:02.546 "max_io_qpairs_per_ctrlr": 127, 00:13:02.546 "in_capsule_data_size": 4096, 00:13:02.546 "max_io_size": 131072, 00:13:02.546 "io_unit_size": 131072, 00:13:02.546 "max_aq_depth": 128, 00:13:02.546 "num_shared_buffers": 511, 00:13:02.546 "buf_cache_size": 4294967295, 00:13:02.546 "dif_insert_or_strip": false, 00:13:02.546 "zcopy": false, 00:13:02.546 "c2h_success": false, 00:13:02.546 "sock_priority": 0, 00:13:02.546 "abort_timeout_sec": 1, 00:13:02.546 "ack_timeout": 0, 00:13:02.546 "data_wr_pool_size": 0 00:13:02.546 } 00:13:02.546 }, 00:13:02.546 { 00:13:02.546 "method": "nvmf_create_subsystem", 00:13:02.546 "params": { 00:13:02.546 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:02.546 "allow_any_host": false, 00:13:02.546 "serial_number": "00000000000000000000", 00:13:02.546 "model_number": "SPDK bdev Controller", 00:13:02.546 "max_namespaces": 32, 00:13:02.546 "min_cntlid": 1, 00:13:02.546 "max_cntlid": 65519, 00:13:02.546 "ana_reporting": false 00:13:02.546 } 00:13:02.546 }, 00:13:02.546 { 00:13:02.546 "method": "nvmf_subsystem_add_host", 00:13:02.546 "params": { 00:13:02.546 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:02.546 "host": "nqn.2016-06.io.spdk:host1", 00:13:02.546 "psk": "key0" 00:13:02.546 } 00:13:02.546 }, 00:13:02.546 { 00:13:02.546 "method": "nvmf_subsystem_add_ns", 00:13:02.546 "params": { 00:13:02.546 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:02.546 "namespace": { 00:13:02.546 "nsid": 1, 00:13:02.546 "bdev_name": "malloc0", 00:13:02.546 "nguid": "2FE8481E4F134FC1922D31AAC49E506B", 00:13:02.546 "uuid": "2fe8481e-4f13-4fc1-922d-31aac49e506b", 00:13:02.546 "no_auto_visible": false 00:13:02.546 } 00:13:02.546 } 00:13:02.546 }, 00:13:02.546 { 00:13:02.546 "method": "nvmf_subsystem_add_listener", 00:13:02.546 "params": { 00:13:02.546 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:02.546 "listen_address": { 00:13:02.546 "trtype": "TCP", 00:13:02.546 "adrfam": "IPv4", 00:13:02.546 "traddr": "10.0.0.3", 00:13:02.546 "trsvcid": "4420" 00:13:02.546 }, 00:13:02.546 "secure_channel": false, 00:13:02.546 "sock_impl": "ssl" 00:13:02.546 
} 00:13:02.546 } 00:13:02.546 ] 00:13:02.546 } 00:13:02.546 ] 00:13:02.546 }' 00:13:02.546 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:02.546 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71875 00:13:02.546 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:13:02.546 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71875 00:13:02.546 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71875 ']' 00:13:02.546 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.546 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:02.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.546 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.546 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:02.546 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:02.546 [2024-11-15 12:48:11.162163] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:13:02.546 [2024-11-15 12:48:11.162245] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.805 [2024-11-15 12:48:11.305906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.805 [2024-11-15 12:48:11.331429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.805 [2024-11-15 12:48:11.331491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.805 [2024-11-15 12:48:11.331501] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.805 [2024-11-15 12:48:11.331507] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.806 [2024-11-15 12:48:11.331514] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:02.806 [2024-11-15 12:48:11.331838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.806 [2024-11-15 12:48:11.470702] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:03.065 [2024-11-15 12:48:11.525502] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:03.065 [2024-11-15 12:48:11.557482] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:03.065 [2024-11-15 12:48:11.557716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:03.633 12:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:03.633 12:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:03.633 12:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:03.633 12:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:03.633 12:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:03.633 12:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.633 12:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=71908 00:13:03.633 12:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 71908 /var/tmp/bdevperf.sock 00:13:03.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:03.633 12:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71908 ']' 00:13:03.633 12:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:03.633 12:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:13:03.633 "subsystems": [ 00:13:03.633 { 00:13:03.633 "subsystem": "keyring", 00:13:03.633 "config": [ 00:13:03.633 { 00:13:03.633 "method": "keyring_file_add_key", 00:13:03.633 "params": { 00:13:03.633 "name": "key0", 00:13:03.633 "path": "/tmp/tmp.fCevzAXk6f" 00:13:03.633 } 00:13:03.633 } 00:13:03.633 ] 00:13:03.633 }, 00:13:03.633 { 00:13:03.633 "subsystem": "iobuf", 00:13:03.633 "config": [ 00:13:03.633 { 00:13:03.633 "method": "iobuf_set_options", 00:13:03.633 "params": { 00:13:03.633 "small_pool_count": 8192, 00:13:03.633 "large_pool_count": 1024, 00:13:03.633 "small_bufsize": 8192, 00:13:03.633 "large_bufsize": 135168, 00:13:03.633 "enable_numa": false 00:13:03.633 } 00:13:03.633 } 00:13:03.633 ] 00:13:03.633 }, 00:13:03.633 { 00:13:03.633 "subsystem": "sock", 00:13:03.633 "config": [ 00:13:03.633 { 00:13:03.633 "method": "sock_set_default_impl", 00:13:03.633 "params": { 00:13:03.633 "impl_name": "uring" 00:13:03.633 } 00:13:03.633 }, 00:13:03.633 { 00:13:03.633 "method": "sock_impl_set_options", 00:13:03.633 "params": { 00:13:03.634 "impl_name": "ssl", 00:13:03.634 "recv_buf_size": 4096, 00:13:03.634 "send_buf_size": 4096, 00:13:03.634 "enable_recv_pipe": true, 00:13:03.634 "enable_quickack": false, 00:13:03.634 "enable_placement_id": 0, 00:13:03.634 "enable_zerocopy_send_server": true, 00:13:03.634 "enable_zerocopy_send_client": false, 00:13:03.634 "zerocopy_threshold": 0, 00:13:03.634 "tls_version": 0, 00:13:03.634 "enable_ktls": false 00:13:03.634 } 00:13:03.634 }, 00:13:03.634 { 00:13:03.634 "method": "sock_impl_set_options", 00:13:03.634 
"params": { 00:13:03.634 "impl_name": "posix", 00:13:03.634 "recv_buf_size": 2097152, 00:13:03.634 "send_buf_size": 2097152, 00:13:03.634 "enable_recv_pipe": true, 00:13:03.634 "enable_quickack": false, 00:13:03.634 "enable_placement_id": 0, 00:13:03.634 "enable_zerocopy_send_server": true, 00:13:03.634 "enable_zerocopy_send_client": false, 00:13:03.634 "zerocopy_threshold": 0, 00:13:03.634 "tls_version": 0, 00:13:03.634 "enable_ktls": false 00:13:03.634 } 00:13:03.634 }, 00:13:03.634 { 00:13:03.634 "method": "sock_impl_set_options", 00:13:03.634 "params": { 00:13:03.634 "impl_name": "uring", 00:13:03.634 "recv_buf_size": 2097152, 00:13:03.634 "send_buf_size": 2097152, 00:13:03.634 "enable_recv_pipe": true, 00:13:03.634 "enable_quickack": false, 00:13:03.634 "enable_placement_id": 0, 00:13:03.634 "enable_zerocopy_send_server": false, 00:13:03.634 "enable_zerocopy_send_client": false, 00:13:03.634 "zerocopy_threshold": 0, 00:13:03.634 "tls_version": 0, 00:13:03.634 "enable_ktls": false 00:13:03.634 } 00:13:03.634 } 00:13:03.634 ] 00:13:03.634 }, 00:13:03.634 { 00:13:03.634 "subsystem": "vmd", 00:13:03.634 "config": [] 00:13:03.634 }, 00:13:03.634 { 00:13:03.634 "subsystem": "accel", 00:13:03.634 "config": [ 00:13:03.634 { 00:13:03.634 "method": "accel_set_options", 00:13:03.634 "params": { 00:13:03.634 "small_cache_size": 128, 00:13:03.634 "large_cache_size": 16, 00:13:03.634 "task_count": 2048, 00:13:03.634 "sequence_count": 2048, 00:13:03.634 "buf_count": 2048 00:13:03.634 } 00:13:03.634 } 00:13:03.634 ] 00:13:03.634 }, 00:13:03.634 { 00:13:03.634 "subsystem": "bdev", 00:13:03.634 "config": [ 00:13:03.634 { 00:13:03.634 "method": "bdev_set_options", 00:13:03.634 "params": { 00:13:03.634 "bdev_io_pool_size": 65535, 00:13:03.634 "bdev_io_cache_size": 256, 00:13:03.634 "bdev_auto_examine": true, 00:13:03.634 "iobuf_small_cache_size": 128, 00:13:03.634 "iobuf_large_cache_size": 16 00:13:03.634 } 00:13:03.634 }, 00:13:03.634 { 00:13:03.634 "method": "bdev_raid_set_options", 00:13:03.634 "params": { 00:13:03.634 "process_window_size_kb": 1024, 00:13:03.634 "process_max_bandwidth_mb_sec": 0 00:13:03.634 } 00:13:03.634 }, 00:13:03.634 { 00:13:03.634 "method": "bdev_iscsi_set_options", 00:13:03.634 "params": { 00:13:03.634 "timeout_sec": 30 00:13:03.634 } 00:13:03.634 }, 00:13:03.634 { 00:13:03.634 "method": "bdev_nvme_set_options", 00:13:03.634 "params": { 00:13:03.634 "action_on_timeout": "none", 00:13:03.634 "timeout_us": 0, 00:13:03.634 "timeout_admin_us": 0, 00:13:03.634 "keep_alive_timeout_ms": 10000, 00:13:03.634 "arbitration_burst": 0, 00:13:03.634 "low_priority_weight": 0, 00:13:03.634 "medium_priority_weight": 0, 00:13:03.634 "high_priority_weight": 0, 00:13:03.634 "nvme_adminq_poll_period_us": 10000, 00:13:03.634 "nvme_ioq_poll_period_us": 0, 00:13:03.634 "io_queue_requests": 512, 00:13:03.634 "delay_cmd_submit": true, 00:13:03.634 "transport_retry_count": 4, 00:13:03.634 "bdev_retry_count": 3, 00:13:03.634 "transport_ack_timeout": 0, 00:13:03.634 "ctrlr_loss_timeout_sec": 0, 00:13:03.634 "reconnect_delay_sec": 0, 00:13:03.634 "fast_io_fail_timeout_sec": 0, 00:13:03.634 "disable_auto_failback": false, 00:13:03.634 "generate_uuids": false, 00:13:03.634 "transport_tos": 0, 00:13:03.634 "nvme_error_stat": false, 00:13:03.634 "rdma_srq_size": 0, 00:13:03.634 "io_path_stat": false, 00:13:03.634 "allow_accel_sequence": false, 00:13:03.634 "rdma_max_cq_size": 0, 00:13:03.634 "rdma_cm_event_timeout_ms": 0, 00:13:03.634 "dhchap_digests": [ 00:13:03.634 "sha256", 00:13:03.634 "sha384", 
00:13:03.634 "sha512" 00:13:03.634 ], 00:13:03.634 "dhchap_dhgroups": [ 00:13:03.634 "null", 00:13:03.634 "ffdhe2048", 00:13:03.634 "ffdhe3072", 00:13:03.634 "ffdhe4096", 00:13:03.634 "ffdhe6144", 00:13:03.634 "ffdhe8192" 00:13:03.634 ] 00:13:03.634 } 00:13:03.634 }, 00:13:03.634 { 00:13:03.634 "method": "bdev_nvme_attach_controller", 00:13:03.634 "params": { 00:13:03.634 "name": "nvme0", 00:13:03.634 "trtype": "TCP", 00:13:03.634 "adrfam": "IPv4", 00:13:03.634 "traddr": "10.0.0.3", 00:13:03.634 "trsvcid": "4420", 00:13:03.634 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:03.634 "prchk_reftag": false, 00:13:03.634 "prchk_guard": false, 00:13:03.634 "ctrlr_loss_timeout_sec": 0, 00:13:03.634 "reconnect_delay_sec": 0, 00:13:03.634 "fast_io_fail_timeout_sec": 0, 00:13:03.634 "psk": "key0", 00:13:03.634 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:03.634 "hdgst": false, 00:13:03.634 "ddgst": false, 00:13:03.634 "multipath": "multipath" 00:13:03.634 } 00:13:03.634 }, 00:13:03.634 { 00:13:03.634 "method": "bdev_nvme_set_hotplug", 00:13:03.634 "params": { 00:13:03.634 "period_us": 100000, 00:13:03.634 "enable": false 00:13:03.634 } 00:13:03.634 }, 00:13:03.634 { 00:13:03.634 "method": "bdev_enable_histogram", 00:13:03.634 "params": { 00:13:03.634 "name": "nvme0n1", 00:13:03.634 "enable": true 00:13:03.634 } 00:13:03.634 }, 00:13:03.634 { 00:13:03.634 "method": "bdev_wait_for_examine" 00:13:03.634 } 00:13:03.634 ] 00:13:03.634 }, 00:13:03.634 { 00:13:03.634 "subsystem": "nbd", 00:13:03.634 "config": [] 00:13:03.634 } 00:13:03.634 ] 00:13:03.634 }' 00:13:03.634 12:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:13:03.634 12:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:03.634 12:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:03.634 12:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:03.634 12:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:03.634 [2024-11-15 12:48:12.254237] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:13:03.634 [2024-11-15 12:48:12.254317] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71908 ] 00:13:03.894 [2024-11-15 12:48:12.394014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.894 [2024-11-15 12:48:12.422059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.894 [2024-11-15 12:48:12.528807] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:03.894 [2024-11-15 12:48:12.558234] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:04.831 12:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:04.831 12:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:04.831 12:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:13:04.831 12:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:13:04.831 12:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.831 12:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:05.089 Running I/O for 1 seconds... 00:13:06.048 4736.00 IOPS, 18.50 MiB/s 00:13:06.048 Latency(us) 00:13:06.048 [2024-11-15T12:48:14.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.048 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:06.048 Verification LBA range: start 0x0 length 0x2000 00:13:06.048 nvme0n1 : 1.01 4797.76 18.74 0.00 0.00 26439.39 10068.71 20614.05 00:13:06.048 [2024-11-15T12:48:14.718Z] =================================================================================================================== 00:13:06.048 [2024-11-15T12:48:14.718Z] Total : 4797.76 18.74 0.00 0.00 26439.39 10068.71 20614.05 00:13:06.048 { 00:13:06.048 "results": [ 00:13:06.048 { 00:13:06.048 "job": "nvme0n1", 00:13:06.048 "core_mask": "0x2", 00:13:06.048 "workload": "verify", 00:13:06.048 "status": "finished", 00:13:06.048 "verify_range": { 00:13:06.048 "start": 0, 00:13:06.048 "length": 8192 00:13:06.048 }, 00:13:06.048 "queue_depth": 128, 00:13:06.048 "io_size": 4096, 00:13:06.048 "runtime": 1.013806, 00:13:06.048 "iops": 4797.76209649578, 00:13:06.048 "mibps": 18.74125818943664, 00:13:06.048 "io_failed": 0, 00:13:06.048 "io_timeout": 0, 00:13:06.048 "avg_latency_us": 26439.392153110046, 00:13:06.048 "min_latency_us": 10068.712727272727, 00:13:06.048 "max_latency_us": 20614.05090909091 00:13:06.048 } 00:13:06.048 ], 00:13:06.048 "core_count": 1 00:13:06.048 } 00:13:06.048 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:13:06.048 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:13:06.048 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:13:06.048 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:13:06.048 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 
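The 1-second verify run above is driven entirely over bdevperf's RPC socket: the JSON passed on /dev/fd/63 registers the TLS PSK under the keyring as key0 and attaches nvme0 to 10.0.0.3:4420 with psk=key0, bdev_nvme_get_controllers confirms the controller came up, and bdevperf.py perform_tests launches the workload whose results are printed above. A minimal host-side sketch of the same flow using live RPCs instead of a pre-built JSON config (paths, NQNs and the PSK file /tmp/tmp.fCevzAXk6f as they appear in this log; an illustration, not the literal tls.sh code):

  SPDK=/home/vagrant/spdk_repo/spdk
  # Start bdevperf idle (-z) and wait for configuration over its RPC socket.
  $SPDK/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 &
  # Register the PSK and attach the TLS-protected controller, mirroring the config dump above.
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fCevzAXk6f
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers    # expect "nvme0"
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

If the PSK does not match the key configured on the target, the attach step fails and bdev_nvme_get_controllers reports no controller, so the jq check on the controller name is what gates the rest of the test.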
00:13:06.048 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:13:06.048 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:06.048 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:13:06.048 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:13:06.048 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:13:06.048 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:06.048 nvmf_trace.0 00:13:06.048 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:13:06.048 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 71908 00:13:06.048 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71908 ']' 00:13:06.048 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71908 00:13:06.048 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:06.048 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:06.048 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71908 00:13:06.048 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:06.048 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:06.048 killing process with pid 71908 00:13:06.048 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71908' 00:13:06.048 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71908 00:13:06.048 Received shutdown signal, test time was about 1.000000 seconds 00:13:06.048 00:13:06.048 Latency(us) 00:13:06.048 [2024-11-15T12:48:14.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.048 [2024-11-15T12:48:14.718Z] =================================================================================================================== 00:13:06.048 [2024-11-15T12:48:14.718Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:06.048 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71908 00:13:06.320 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:13:06.320 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:06.320 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:13:06.320 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:06.320 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:13:06.320 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:06.320 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:06.320 rmmod nvme_tcp 00:13:06.320 rmmod nvme_fabrics 00:13:06.320 rmmod nvme_keyring 00:13:06.320 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:06.320 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:13:06.320 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:13:06.320 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 71875 ']' 00:13:06.320 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 71875 00:13:06.320 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71875 ']' 00:13:06.320 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71875 00:13:06.320 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:06.320 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:06.320 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71875 00:13:06.320 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:06.320 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:06.320 killing process with pid 71875 00:13:06.320 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71875' 00:13:06.320 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71875 00:13:06.320 12:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71875 00:13:06.579 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:06.579 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:06.579 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:06.579 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:13:06.579 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:13:06.579 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:06.579 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:13:06.579 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:06.579 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:06.579 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:06.579 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:06.579 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:06.579 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:06.579 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:06.579 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:06.579 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:06.579 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:06.579 12:48:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:06.579 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:06.579 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:06.838 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:06.838 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:06.838 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:06.838 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.838 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:06.838 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.838 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:13:06.838 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.ZLyf1hE94y /tmp/tmp.GrdLsCmJgo /tmp/tmp.fCevzAXk6f 00:13:06.838 00:13:06.838 real 1m19.191s 00:13:06.838 user 2m7.910s 00:13:06.838 sys 0m25.720s 00:13:06.838 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:06.838 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:06.838 ************************************ 00:13:06.838 END TEST nvmf_tls 00:13:06.838 ************************************ 00:13:06.838 12:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:13:06.838 12:48:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:06.838 12:48:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:06.838 12:48:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:06.839 ************************************ 00:13:06.839 START TEST nvmf_fips 00:13:06.839 ************************************ 00:13:06.839 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:13:06.839 * Looking for test storage... 
00:13:06.839 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:13:06.839 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:06.839 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:13:06.839 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:07.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.100 --rc genhtml_branch_coverage=1 00:13:07.100 --rc genhtml_function_coverage=1 00:13:07.100 --rc genhtml_legend=1 00:13:07.100 --rc geninfo_all_blocks=1 00:13:07.100 --rc geninfo_unexecuted_blocks=1 00:13:07.100 00:13:07.100 ' 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:07.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.100 --rc genhtml_branch_coverage=1 00:13:07.100 --rc genhtml_function_coverage=1 00:13:07.100 --rc genhtml_legend=1 00:13:07.100 --rc geninfo_all_blocks=1 00:13:07.100 --rc geninfo_unexecuted_blocks=1 00:13:07.100 00:13:07.100 ' 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:07.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.100 --rc genhtml_branch_coverage=1 00:13:07.100 --rc genhtml_function_coverage=1 00:13:07.100 --rc genhtml_legend=1 00:13:07.100 --rc geninfo_all_blocks=1 00:13:07.100 --rc geninfo_unexecuted_blocks=1 00:13:07.100 00:13:07.100 ' 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:07.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.100 --rc genhtml_branch_coverage=1 00:13:07.100 --rc genhtml_function_coverage=1 00:13:07.100 --rc genhtml_legend=1 00:13:07.100 --rc geninfo_all_blocks=1 00:13:07.100 --rc geninfo_unexecuted_blocks=1 00:13:07.100 00:13:07.100 ' 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
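What the trace above walks through in scripts/common.sh is a plain per-component version comparison: lt and ge split both version strings on dots and dashes (IFS=.-:), compare the components in order, and decide on the first one that differs; the same helper is reused a few lines further down for the ge 3.1.1 3.0.0 OpenSSL check. A simplified sketch of that idiom, assuming purely numeric dot-separated versions (SPDK's cmp_versions also copes with missing and dash-separated parts):

  version_ge() {
      # Return 0 (true) when $1 >= $2, comparing numeric components left to right.
      local IFS=.
      local -a a=($1) b=($2)
      local i x y
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          x=${a[i]:-0}; y=${b[i]:-0}
          (( x > y )) && return 0
          (( x < y )) && return 1
      done
      return 0    # all components equal counts as "greater or equal"
  }
  version_ge 3.1.1 3.0.0 && echo 'OpenSSL is new enough for the FIPS provider checks'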
00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:07.100 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:07.100 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:13:07.101 Error setting digest 00:13:07.101 40822129497F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:13:07.101 40822129497F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:07.101 
12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:07.101 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:07.102 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:07.102 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:07.102 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:07.102 Cannot find device "nvmf_init_br" 00:13:07.102 12:48:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:13:07.102 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:07.361 Cannot find device "nvmf_init_br2" 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:07.361 Cannot find device "nvmf_tgt_br" 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:07.361 Cannot find device "nvmf_tgt_br2" 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:07.361 Cannot find device "nvmf_init_br" 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:07.361 Cannot find device "nvmf_init_br2" 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:07.361 Cannot find device "nvmf_tgt_br" 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:07.361 Cannot find device "nvmf_tgt_br2" 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:07.361 Cannot find device "nvmf_br" 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:07.361 Cannot find device "nvmf_init_if" 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:07.361 Cannot find device "nvmf_init_if2" 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:07.361 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:07.361 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:07.361 12:48:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:07.361 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:07.361 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:07.361 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:07.361 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:07.361 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:07.361 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:07.621 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:07.621 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:13:07.621 00:13:07.621 --- 10.0.0.3 ping statistics --- 00:13:07.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.621 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:07.621 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:07.621 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:13:07.621 00:13:07.621 --- 10.0.0.4 ping statistics --- 00:13:07.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.621 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:07.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:07.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:13:07.621 00:13:07.621 --- 10.0.0.1 ping statistics --- 00:13:07.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.621 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:07.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:07.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:13:07.621 00:13:07.621 --- 10.0.0.2 ping statistics --- 00:13:07.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.621 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=72218 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 72218 00:13:07.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72218 ']' 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:07.621 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:07.621 [2024-11-15 12:48:16.241011] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:13:07.621 [2024-11-15 12:48:16.241340] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.881 [2024-11-15 12:48:16.392697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.881 [2024-11-15 12:48:16.429511] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:07.881 [2024-11-15 12:48:16.429849] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:07.881 [2024-11-15 12:48:16.430020] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:07.881 [2024-11-15 12:48:16.430179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:07.881 [2024-11-15 12:48:16.430224] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:07.881 [2024-11-15 12:48:16.430599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.881 [2024-11-15 12:48:16.466270] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:08.819 12:48:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:08.819 12:48:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:13:08.819 12:48:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:08.819 12:48:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:08.819 12:48:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:08.819 12:48:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:08.819 12:48:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:13:08.819 12:48:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:13:08.819 12:48:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:13:08.819 12:48:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.BeK 00:13:08.819 12:48:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:13:08.819 12:48:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.BeK 00:13:08.819 12:48:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.BeK 00:13:08.819 12:48:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.BeK 00:13:08.819 12:48:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:08.819 [2024-11-15 12:48:17.457331] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:08.819 [2024-11-15 12:48:17.473297] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:08.819 [2024-11-15 12:48:17.473593] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:09.078 malloc0 00:13:09.078 12:48:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:09.078 12:48:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=72260 00:13:09.078 12:48:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:09.078 12:48:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 72260 /var/tmp/bdevperf.sock 00:13:09.078 12:48:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72260 ']' 00:13:09.078 12:48:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:09.078 12:48:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:09.078 12:48:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:09.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:09.078 12:48:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:09.078 12:48:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:09.078 [2024-11-15 12:48:17.592934] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:13:09.078 [2024-11-15 12:48:17.593199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72260 ] 00:13:09.078 [2024-11-15 12:48:17.737861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.337 [2024-11-15 12:48:17.778273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.337 [2024-11-15 12:48:17.813393] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:09.337 12:48:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.337 12:48:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:13:09.337 12:48:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.BeK 00:13:09.596 12:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:09.855 [2024-11-15 12:48:18.359485] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:09.855 TLSTESTn1 00:13:09.855 12:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:10.114 Running I/O for 10 seconds... 
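Condensed, the TLS path this FIPS run exercises is: write the interchange-format PSK to a 0600 file, register it with the bdevperf application's keyring, then attach the controller with --psk so the TCP connection to 10.0.0.3:4420 is carried over TLS. Below is a minimal replay of those steps as a sketch, reusing the key value, socket paths and NQNs that appear in this log; the mktemp suffix (spdk-psk.BeK here) differs per run and the target is assumed to already be listening as set up above.

# Interchange-format NVMe TLS PSK taken from fips.sh in this run
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=$(mktemp -t spdk-psk.XXX)   # e.g. /tmp/spdk-psk.BeK in this log
echo -n "$key" > "$key_path"
chmod 0600 "$key_path"               # PSK file must not be group/world readable

# Register the key with the already-running bdevperf app, then attach over TLS
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# Drive verify I/O against the resulting TLSTESTn1 bdev for the 10 s run
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests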
00:13:11.989 4608.00 IOPS, 18.00 MiB/s [2024-11-15T12:48:21.597Z] 4634.00 IOPS, 18.10 MiB/s [2024-11-15T12:48:22.975Z] 4684.67 IOPS, 18.30 MiB/s [2024-11-15T12:48:23.912Z] 4716.50 IOPS, 18.42 MiB/s [2024-11-15T12:48:24.850Z] 4737.60 IOPS, 18.51 MiB/s [2024-11-15T12:48:25.786Z] 4763.83 IOPS, 18.61 MiB/s [2024-11-15T12:48:26.725Z] 4771.29 IOPS, 18.64 MiB/s [2024-11-15T12:48:27.662Z] 4777.75 IOPS, 18.66 MiB/s [2024-11-15T12:48:28.600Z] 4784.22 IOPS, 18.69 MiB/s [2024-11-15T12:48:28.600Z] 4786.00 IOPS, 18.70 MiB/s 00:13:19.930 Latency(us) 00:13:19.930 [2024-11-15T12:48:28.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.930 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:19.930 Verification LBA range: start 0x0 length 0x2000 00:13:19.930 TLSTESTn1 : 10.01 4791.99 18.72 0.00 0.00 26666.93 4617.31 29550.78 00:13:19.930 [2024-11-15T12:48:28.600Z] =================================================================================================================== 00:13:19.930 [2024-11-15T12:48:28.600Z] Total : 4791.99 18.72 0.00 0.00 26666.93 4617.31 29550.78 00:13:19.930 { 00:13:19.930 "results": [ 00:13:19.930 { 00:13:19.930 "job": "TLSTESTn1", 00:13:19.930 "core_mask": "0x4", 00:13:19.930 "workload": "verify", 00:13:19.930 "status": "finished", 00:13:19.930 "verify_range": { 00:13:19.930 "start": 0, 00:13:19.930 "length": 8192 00:13:19.930 }, 00:13:19.930 "queue_depth": 128, 00:13:19.930 "io_size": 4096, 00:13:19.930 "runtime": 10.013383, 00:13:19.930 "iops": 4791.9868839532055, 00:13:19.930 "mibps": 18.71869876544221, 00:13:19.930 "io_failed": 0, 00:13:19.930 "io_timeout": 0, 00:13:19.930 "avg_latency_us": 26666.933874928007, 00:13:19.930 "min_latency_us": 4617.309090909091, 00:13:19.930 "max_latency_us": 29550.778181818183 00:13:19.930 } 00:13:19.930 ], 00:13:19.930 "core_count": 1 00:13:19.930 } 00:13:19.930 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:13:19.930 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:13:19.930 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:13:19.930 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:13:19.930 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:13:19.930 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:19.930 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:13:19.931 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:13:19.931 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:13:19.931 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:19.931 nvmf_trace.0 00:13:20.190 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:13:20.190 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 72260 00:13:20.190 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72260 ']' 00:13:20.190 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
72260 00:13:20.190 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:13:20.190 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:20.190 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72260 00:13:20.190 killing process with pid 72260 00:13:20.190 Received shutdown signal, test time was about 10.000000 seconds 00:13:20.190 00:13:20.190 Latency(us) 00:13:20.190 [2024-11-15T12:48:28.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:20.190 [2024-11-15T12:48:28.860Z] =================================================================================================================== 00:13:20.190 [2024-11-15T12:48:28.860Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:20.190 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:20.190 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:20.190 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72260' 00:13:20.190 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72260 00:13:20.190 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72260 00:13:20.190 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:13:20.190 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:20.190 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:13:20.450 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:20.450 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:13:20.450 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:20.450 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:20.450 rmmod nvme_tcp 00:13:20.450 rmmod nvme_fabrics 00:13:20.450 rmmod nvme_keyring 00:13:20.450 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:20.450 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:13:20.450 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:13:20.450 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 72218 ']' 00:13:20.450 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 72218 00:13:20.450 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72218 ']' 00:13:20.450 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 72218 00:13:20.450 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:13:20.450 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:20.450 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72218 00:13:20.450 killing process with pid 72218 00:13:20.450 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:20.450 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:20.450 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72218' 00:13:20.450 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72218 00:13:20.450 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72218 00:13:20.450 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:20.450 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:20.450 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:20.450 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:13:20.450 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:13:20.450 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:20.450 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:13:20.450 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:20.450 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:20.450 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:20.709 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:20.709 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:20.709 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:20.709 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:20.709 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:20.709 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:20.709 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:20.709 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:20.709 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:20.709 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:20.709 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:20.709 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:20.709 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:20.709 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.709 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:20.709 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.709 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:13:20.710 12:48:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.BeK 00:13:20.710 ************************************ 00:13:20.710 END TEST nvmf_fips 00:13:20.710 ************************************ 00:13:20.710 00:13:20.710 real 0m13.973s 00:13:20.710 user 0m19.063s 00:13:20.710 sys 0m5.464s 00:13:20.710 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.710 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:20.970 ************************************ 00:13:20.970 START TEST nvmf_control_msg_list 00:13:20.970 ************************************ 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:13:20.970 * Looking for test storage... 00:13:20.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:20.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.970 --rc genhtml_branch_coverage=1 00:13:20.970 --rc genhtml_function_coverage=1 00:13:20.970 --rc genhtml_legend=1 00:13:20.970 --rc geninfo_all_blocks=1 00:13:20.970 --rc geninfo_unexecuted_blocks=1 00:13:20.970 00:13:20.970 ' 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:20.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.970 --rc genhtml_branch_coverage=1 00:13:20.970 --rc genhtml_function_coverage=1 00:13:20.970 --rc genhtml_legend=1 00:13:20.970 --rc geninfo_all_blocks=1 00:13:20.970 --rc geninfo_unexecuted_blocks=1 00:13:20.970 00:13:20.970 ' 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:20.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.970 --rc genhtml_branch_coverage=1 00:13:20.970 --rc genhtml_function_coverage=1 00:13:20.970 --rc genhtml_legend=1 00:13:20.970 --rc geninfo_all_blocks=1 00:13:20.970 --rc geninfo_unexecuted_blocks=1 00:13:20.970 00:13:20.970 ' 00:13:20.970 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:20.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.971 --rc genhtml_branch_coverage=1 00:13:20.971 --rc genhtml_function_coverage=1 00:13:20.971 --rc genhtml_legend=1 00:13:20.971 --rc geninfo_all_blocks=1 00:13:20.971 --rc 
geninfo_unexecuted_blocks=1 00:13:20.971 00:13:20.971 ' 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:20.971 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:20.971 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:21.231 Cannot find device "nvmf_init_br" 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:21.231 Cannot find device "nvmf_init_br2" 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:21.231 Cannot find device "nvmf_tgt_br" 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:21.231 Cannot find device "nvmf_tgt_br2" 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:21.231 Cannot find device "nvmf_init_br" 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:21.231 Cannot find device "nvmf_init_br2" 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:21.231 Cannot find device "nvmf_tgt_br" 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:21.231 Cannot find device "nvmf_tgt_br2" 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:21.231 Cannot find device "nvmf_br" 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:21.231 Cannot find 
device "nvmf_init_if" 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:21.231 Cannot find device "nvmf_init_if2" 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:21.231 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:21.231 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:21.231 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:21.232 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:21.491 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:21.491 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:21.491 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:21.491 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:21.491 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:21.491 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:21.491 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:21.491 12:48:29 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:21.491 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:21.491 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:21.491 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:21.491 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:21.491 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:21.491 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:21.491 12:48:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:21.491 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:21.491 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:13:21.491 00:13:21.491 --- 10.0.0.3 ping statistics --- 00:13:21.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.491 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:21.491 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:21.491 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:13:21.491 00:13:21.491 --- 10.0.0.4 ping statistics --- 00:13:21.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.491 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:21.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:21.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:13:21.491 00:13:21.491 --- 10.0.0.1 ping statistics --- 00:13:21.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.491 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:21.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:21.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:13:21.491 00:13:21.491 --- 10.0.0.2 ping statistics --- 00:13:21.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.491 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=72631 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 72631 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 72631 ']' 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:21.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:21.491 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:21.491 [2024-11-15 12:48:30.127231] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:13:21.491 [2024-11-15 12:48:30.127320] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.751 [2024-11-15 12:48:30.279150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.751 [2024-11-15 12:48:30.336491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:21.751 [2024-11-15 12:48:30.336572] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:21.751 [2024-11-15 12:48:30.336594] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:21.751 [2024-11-15 12:48:30.336645] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:21.751 [2024-11-15 12:48:30.336663] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:21.751 [2024-11-15 12:48:30.337109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.751 [2024-11-15 12:48:30.380819] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:22.010 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:22.010 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:13:22.010 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:22.010 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:22.010 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:22.010 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:22.010 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:13:22.010 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:13:22.010 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:13:22.010 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.010 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:22.010 [2024-11-15 12:48:30.480981] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:22.010 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.010 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:13:22.010 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.010 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:22.010 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.010 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:13:22.010 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.010 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:22.010 Malloc0 00:13:22.010 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.010 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:13:22.010 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.010 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:22.010 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.010 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:13:22.010 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.010 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:22.010 [2024-11-15 12:48:30.516661] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:22.011 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.011 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=72655 00:13:22.011 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:22.011 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=72656 00:13:22.011 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:22.011 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=72657 00:13:22.011 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 72655 00:13:22.011 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:22.270 [2024-11-15 12:48:30.694754] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
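The target here is started with a single control message buffer (--control-msg-num 1) and 768-byte in-capsule data, and three single-core, queue-depth-1 initiators are then pointed at the same listener, presumably so the control-message free list is exhausted and the queue/reuse path gets exercised. A condensed sketch of the equivalent setup, assuming rpc.py talks to the nvmf_tgt on the default /var/tmp/spdk.sock inside the nvmf_tgt_ns_spdk namespace, and that the three perf instances run concurrently as the pids above suggest:

# Target-side RPC sequence replayed from the rpc_cmd calls above
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
"$rpc" nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a        # allow any host
"$rpc" bdev_malloc_create -b Malloc0 32 512                        # 32 MiB, 512 B blocks
"$rpc" nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

# Three queue-depth-1 initiators on cores 1-3 contend for the single
# control message buffer configured above
for mask in 0x2 0x4 0x8; do
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c "$mask" -q 1 -o 4096 \
      -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' &
done
wait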
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:22.270 [2024-11-15 12:48:30.704942] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:22.270 [2024-11-15 12:48:30.705537] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:23.208 Initializing NVMe Controllers 00:13:23.208 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:13:23.208 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:13:23.208 Initialization complete. Launching workers. 00:13:23.208 ======================================================== 00:13:23.208 Latency(us) 00:13:23.208 Device Information : IOPS MiB/s Average min max 00:13:23.208 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3859.00 15.07 258.82 121.06 600.89 00:13:23.208 ======================================================== 00:13:23.208 Total : 3859.00 15.07 258.82 121.06 600.89 00:13:23.208 00:13:23.208 Initializing NVMe Controllers 00:13:23.208 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:13:23.208 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:13:23.208 Initialization complete. Launching workers. 00:13:23.208 ======================================================== 00:13:23.208 Latency(us) 00:13:23.208 Device Information : IOPS MiB/s Average min max 00:13:23.208 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3845.00 15.02 259.72 163.86 739.66 00:13:23.208 ======================================================== 00:13:23.208 Total : 3845.00 15.02 259.72 163.86 739.66 00:13:23.208 00:13:23.208 Initializing NVMe Controllers 00:13:23.208 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:13:23.208 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:13:23.208 Initialization complete. Launching workers. 
00:13:23.208 ======================================================== 00:13:23.208 Latency(us) 00:13:23.208 Device Information : IOPS MiB/s Average min max 00:13:23.208 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3847.00 15.03 259.58 163.89 617.12 00:13:23.208 ======================================================== 00:13:23.208 Total : 3847.00 15.03 259.58 163.89 617.12 00:13:23.208 00:13:23.208 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 72656 00:13:23.208 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 72657 00:13:23.208 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:23.208 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:13:23.208 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:23.208 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:13:23.208 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:23.208 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:13:23.208 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:23.208 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:23.208 rmmod nvme_tcp 00:13:23.208 rmmod nvme_fabrics 00:13:23.208 rmmod nvme_keyring 00:13:23.208 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:23.208 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:13:23.208 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:13:23.208 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 72631 ']' 00:13:23.208 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 72631 00:13:23.208 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 72631 ']' 00:13:23.208 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 72631 00:13:23.208 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:13:23.208 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:23.208 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72631 00:13:23.468 killing process with pid 72631 00:13:23.468 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:23.468 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:23.468 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72631' 00:13:23.468 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 72631 00:13:23.468 12:48:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 72631 00:13:23.468 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:23.468 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:23.468 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:23.468 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:13:23.468 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:13:23.468 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:23.468 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:13:23.468 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:23.468 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:23.468 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:23.468 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:23.468 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:23.468 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:23.468 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:23.468 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:23.468 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:23.468 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:23.468 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:23.727 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:23.727 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:23.727 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:23.727 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:23.727 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:23.727 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.727 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:23.727 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.727 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:13:23.727 00:13:23.727 real 0m2.855s 00:13:23.727 user 0m4.739s 00:13:23.727 
sys 0m1.258s 00:13:23.727 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:23.727 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:23.727 ************************************ 00:13:23.727 END TEST nvmf_control_msg_list 00:13:23.727 ************************************ 00:13:23.727 12:48:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:13:23.727 12:48:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:23.727 12:48:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:23.727 12:48:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:23.727 ************************************ 00:13:23.727 START TEST nvmf_wait_for_buf 00:13:23.727 ************************************ 00:13:23.728 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:13:23.728 * Looking for test storage... 00:13:23.988 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:23.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.988 --rc genhtml_branch_coverage=1 00:13:23.988 --rc genhtml_function_coverage=1 00:13:23.988 --rc genhtml_legend=1 00:13:23.988 --rc geninfo_all_blocks=1 00:13:23.988 --rc geninfo_unexecuted_blocks=1 00:13:23.988 00:13:23.988 ' 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:23.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.988 --rc genhtml_branch_coverage=1 00:13:23.988 --rc genhtml_function_coverage=1 00:13:23.988 --rc genhtml_legend=1 00:13:23.988 --rc geninfo_all_blocks=1 00:13:23.988 --rc geninfo_unexecuted_blocks=1 00:13:23.988 00:13:23.988 ' 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:23.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.988 --rc genhtml_branch_coverage=1 00:13:23.988 --rc genhtml_function_coverage=1 00:13:23.988 --rc genhtml_legend=1 00:13:23.988 --rc geninfo_all_blocks=1 00:13:23.988 --rc geninfo_unexecuted_blocks=1 00:13:23.988 00:13:23.988 ' 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:23.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.988 --rc genhtml_branch_coverage=1 00:13:23.988 --rc genhtml_function_coverage=1 00:13:23.988 --rc genhtml_legend=1 00:13:23.988 --rc geninfo_all_blocks=1 00:13:23.988 --rc geninfo_unexecuted_blocks=1 00:13:23.988 00:13:23.988 ' 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:23.988 12:48:32 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:23.988 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
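
The "[: : integer expression expected" message above is harmless: line 33 of test/nvmf/common.sh compares a variable that happens to be empty with -eq, and the test builtin insists on integers for that operator. A two-line illustration (flag is a made-up variable, not one the harness actually uses):

    flag=""
    [ "$flag" -eq 1 ] && echo enabled        # prints "[: : integer expression expected"
    [ "${flag:-0}" -eq 1 ] && echo enabled   # guarded form, no message
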
00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:23.988 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:23.988 Cannot find device "nvmf_init_br" 00:13:23.989 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:13:23.989 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:23.989 Cannot find device "nvmf_init_br2" 00:13:23.989 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:13:23.989 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:23.989 Cannot find device "nvmf_tgt_br" 00:13:23.989 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:13:23.989 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:23.989 Cannot find device "nvmf_tgt_br2" 00:13:23.989 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:13:23.989 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:23.989 Cannot find device "nvmf_init_br" 00:13:23.989 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:13:23.989 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:23.989 Cannot find device "nvmf_init_br2" 00:13:23.989 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:13:23.989 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:23.989 Cannot find device "nvmf_tgt_br" 00:13:23.989 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:13:23.989 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:23.989 Cannot find device "nvmf_tgt_br2" 00:13:23.989 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:13:23.989 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:23.989 Cannot find device "nvmf_br" 00:13:23.989 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:13:23.989 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:23.989 Cannot find device "nvmf_init_if" 00:13:23.989 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:13:23.989 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:24.247 Cannot find device "nvmf_init_if2" 00:13:24.247 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:13:24.247 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:24.247 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:24.247 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:13:24.247 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:24.247 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:24.247 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:13:24.247 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:24.247 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:24.247 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:24.247 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:24.247 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:24.247 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:24.247 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:24.247 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:24.247 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:24.248 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:24.248 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:24.248 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:24.248 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:24.248 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:24.248 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:24.248 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:24.248 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:24.248 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:24.248 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:24.248 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:24.248 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:24.248 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:24.248 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:24.248 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:24.248 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:24.248 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:24.248 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:24.248 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:24.248 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:24.248 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:24.248 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:24.248 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:24.248 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:24.248 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:24.248 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:13:24.248 00:13:24.248 --- 10.0.0.3 ping statistics --- 00:13:24.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.248 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:13:24.248 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:24.507 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:24.507 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:13:24.507 00:13:24.507 --- 10.0.0.4 ping statistics --- 00:13:24.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.507 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:13:24.507 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:24.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:24.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:13:24.507 00:13:24.507 --- 10.0.0.1 ping statistics --- 00:13:24.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.507 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:13:24.507 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:24.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
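
Stripped of the second initiator/target pair (nvmf_init_if2/10.0.0.2 and nvmf_tgt_if2/10.0.0.4) and of error handling, the topology nvmf_veth_init builds above is roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, 10.0.0.1
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side,    10.0.0.3
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3          # host-side initiator can reach the namespaced target

The real rules also carry '-m comment --comment SPDK_NVMF:...', which is what lets the teardown's iptables-save | grep -v SPDK_NVMF | iptables-restore pipeline strip exactly these entries later.
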
00:13:24.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:13:24.507 00:13:24.507 --- 10.0.0.2 ping statistics --- 00:13:24.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.507 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:13:24.507 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:24.507 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:13:24.507 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:24.507 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:24.507 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:24.507 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:24.508 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:24.508 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:24.508 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:24.508 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:13:24.508 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:24.508 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:24.508 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:24.508 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:13:24.508 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=72895 00:13:24.508 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 72895 00:13:24.508 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 72895 ']' 00:13:24.508 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.508 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:24.508 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.508 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:24.508 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:24.508 [2024-11-15 12:48:33.021748] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
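
The configuration applied next (traced below) is the whole point of wait_for_buf: the target was started with --wait-for-rpc so the iobuf pool can be shrunk before the framework initializes, and the transport gets very few shared buffers, so 128 KiB reads are forced to wait for buffers. Condensed, with rpc_cmd again standing in for the wrapper around scripts/rpc.py and the pass criterion written out as a plain shell check rather than the script's exact wording:

    rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0
    rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192   # deliberately tiny pool
    rpc_cmd framework_start_init
    rpc_cmd bdev_malloc_create -b Malloc0 32 512
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24             # only 24 shared buffers
    rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
    retries=$(rpc_cmd iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    [[ $retries -eq 0 ]] && exit 1        # the test expects at least one buffer-wait retry

The starved pool is visible in the result table further down: roughly 513 IOPS at ~7.8 ms average latency (versus ~0.26 ms in the control_msg_list run above) and a small_pool.retry count of 4902.
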
00:13:24.508 [2024-11-15 12:48:33.022071] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.508 [2024-11-15 12:48:33.166588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.767 [2024-11-15 12:48:33.195573] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.767 [2024-11-15 12:48:33.195626] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.767 [2024-11-15 12:48:33.195653] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:24.767 [2024-11-15 12:48:33.195660] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:24.767 [2024-11-15 12:48:33.195666] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:24.767 [2024-11-15 12:48:33.195929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.336 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:25.336 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:13:25.336 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:25.336 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:25.336 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:25.336 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:25.336 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:13:25.336 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:13:25.336 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:13:25.336 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.336 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:25.336 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.336 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:13:25.336 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.336 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:25.336 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.336 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:13:25.336 12:48:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.336 12:48:33 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:25.336 [2024-11-15 12:48:33.980211] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:25.336 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.336 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:13:25.336 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.336 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:25.595 Malloc0 00:13:25.595 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.596 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:13:25.596 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.596 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:25.596 [2024-11-15 12:48:34.022396] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:25.596 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.596 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:13:25.596 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.596 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:25.596 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.596 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:13:25.596 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.596 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:25.596 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.596 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:13:25.596 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.596 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:25.596 [2024-11-15 12:48:34.050460] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:25.596 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.596 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:25.596 [2024-11-15 12:48:34.229805] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:26.974 Initializing NVMe Controllers 00:13:26.974 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:13:26.975 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:13:26.975 Initialization complete. Launching workers. 00:13:26.975 ======================================================== 00:13:26.975 Latency(us) 00:13:26.975 Device Information : IOPS MiB/s Average min max 00:13:26.975 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 513.42 64.18 7791.01 3993.04 9206.02 00:13:26.975 ======================================================== 00:13:26.975 Total : 513.42 64.18 7791.01 3993.04 9206.02 00:13:26.975 00:13:26.975 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:13:26.975 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.975 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:26.975 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:13:26.975 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.975 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4902 00:13:26.975 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4902 -eq 0 ]] 00:13:26.975 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:26.975 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:13:26.975 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:26.975 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:13:26.975 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:26.975 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:13:26.975 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:26.975 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:26.975 rmmod nvme_tcp 00:13:26.975 rmmod nvme_fabrics 00:13:27.234 rmmod nvme_keyring 00:13:27.234 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:27.234 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:13:27.234 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:13:27.234 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 72895 ']' 00:13:27.234 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 72895 00:13:27.234 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 72895 ']' 00:13:27.234 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # 
kill -0 72895 00:13:27.234 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:13:27.234 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:27.234 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72895 00:13:27.234 killing process with pid 72895 00:13:27.235 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:27.235 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:27.235 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72895' 00:13:27.235 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 72895 00:13:27.235 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 72895 00:13:27.235 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:27.235 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:27.235 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:27.235 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:13:27.235 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:13:27.235 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:27.235 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:13:27.235 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:27.235 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:27.235 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:27.235 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:27.235 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:27.235 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:27.494 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:27.494 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:27.494 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:27.494 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:27.494 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:27.494 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:27.494 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:27.494 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:27.494 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:27.494 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:27.494 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.494 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:27.494 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.494 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:13:27.494 00:13:27.494 real 0m3.768s 00:13:27.494 user 0m3.214s 00:13:27.494 sys 0m0.779s 00:13:27.494 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.494 ************************************ 00:13:27.494 END TEST nvmf_wait_for_buf 00:13:27.494 ************************************ 00:13:27.494 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:27.494 12:48:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:13:27.494 12:48:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:13:27.494 12:48:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:13:27.494 12:48:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:27.494 12:48:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.494 12:48:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:27.494 ************************************ 00:13:27.494 START TEST nvmf_nsid 00:13:27.494 ************************************ 00:13:27.494 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:13:27.755 * Looking for test storage... 
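
Each of these test scripts is driven through the harness's run_test helper, which prints the START/END banners and the real/user/sys timings seen above. Its effect is roughly the sketch below; the real implementation lives in autotest_common.sh and also validates its arguments (the '[' 3 -le 1 ']' checks in the trace):

    run_test() {                     # sketch only, not the actual helper
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"                    # e.g. nsid.sh --transport=tcp, as invoked above
        echo "************ END TEST $name ************"
    }
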
00:13:27.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:27.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.755 --rc genhtml_branch_coverage=1 00:13:27.755 --rc genhtml_function_coverage=1 00:13:27.755 --rc genhtml_legend=1 00:13:27.755 --rc geninfo_all_blocks=1 00:13:27.755 --rc geninfo_unexecuted_blocks=1 00:13:27.755 00:13:27.755 ' 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:27.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.755 --rc genhtml_branch_coverage=1 00:13:27.755 --rc genhtml_function_coverage=1 00:13:27.755 --rc genhtml_legend=1 00:13:27.755 --rc geninfo_all_blocks=1 00:13:27.755 --rc geninfo_unexecuted_blocks=1 00:13:27.755 00:13:27.755 ' 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:27.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.755 --rc genhtml_branch_coverage=1 00:13:27.755 --rc genhtml_function_coverage=1 00:13:27.755 --rc genhtml_legend=1 00:13:27.755 --rc geninfo_all_blocks=1 00:13:27.755 --rc geninfo_unexecuted_blocks=1 00:13:27.755 00:13:27.755 ' 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:27.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.755 --rc genhtml_branch_coverage=1 00:13:27.755 --rc genhtml_function_coverage=1 00:13:27.755 --rc genhtml_legend=1 00:13:27.755 --rc geninfo_all_blocks=1 00:13:27.755 --rc geninfo_unexecuted_blocks=1 00:13:27.755 00:13:27.755 ' 00:13:27.755 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
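
The cmp_versions / lt dance traced above just asks whether the installed lcov is older than 2.x so the right coverage flags can be exported. A shorter equivalent, using sort -V instead of the component-by-component loop the harness actually runs (a sketch, not the real scripts/common.sh code):

    lcov_ver=$(lcov --version | awk '{print $NF}')
    # "is $1 older than $2" for dotted version strings
    version_lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
    if version_lt "$lcov_ver" 2; then
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi
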
00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:27.756 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:27.756 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:28.015 Cannot find device "nvmf_init_br" 00:13:28.015 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:13:28.015 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:28.015 Cannot find device "nvmf_init_br2" 00:13:28.015 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:13:28.015 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:28.015 Cannot find device "nvmf_tgt_br" 00:13:28.015 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:13:28.015 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:28.016 Cannot find device "nvmf_tgt_br2" 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:28.016 Cannot find device "nvmf_init_br" 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:28.016 Cannot find device "nvmf_init_br2" 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:28.016 Cannot find device "nvmf_tgt_br" 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:28.016 Cannot find device "nvmf_tgt_br2" 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:28.016 Cannot find device "nvmf_br" 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:28.016 Cannot find device "nvmf_init_if" 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:28.016 Cannot find device "nvmf_init_if2" 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:28.016 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:13:28.016 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:28.016 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:28.275 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:28.275 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:28.275 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:28.275 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:28.275 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:28.275 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
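nvmf_veth_init above builds the whole test network in software: veth pairs for the initiator and target sides, the target ends moved into the nvmf_tgt_ns_spdk namespace, and the host-side peers enslaved to one bridge. A condensed sketch of that topology with the names and addresses from the trace (one pair per side shown; the trace creates a second initiator and a second target pair the same way):

    # Sketch of the veth/bridge layout wired up by nvmf_veth_init (names/IPs from the trace).
    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + bridge-side peer
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target end + bridge-side peer
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # NVMF_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP

    ip link add nvmf_br type bridge
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link set nvmf_init_br master nvmf_br                     # both bridge-side peers on nvmf_br
    ip link set nvmf_tgt_br master nvmf_br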
00:13:28.275 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:28.275 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:28.275 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:28.275 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:28.275 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:28.275 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:28.275 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:28.275 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:28.275 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:13:28.275 00:13:28.275 --- 10.0.0.3 ping statistics --- 00:13:28.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.275 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:13:28.275 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:28.275 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:28.276 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:13:28.276 00:13:28.276 --- 10.0.0.4 ping statistics --- 00:13:28.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.276 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:13:28.276 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:28.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:28.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:13:28.276 00:13:28.276 --- 10.0.0.1 ping statistics --- 00:13:28.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.276 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:13:28.276 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:28.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:28.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:13:28.276 00:13:28.276 --- 10.0.0.2 ping statistics --- 00:13:28.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.276 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:13:28.276 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:28.276 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:13:28.276 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:28.276 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:28.276 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:28.276 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:28.276 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:28.276 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:28.276 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:28.276 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:13:28.276 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:28.276 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:28.276 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:28.276 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=73165 00:13:28.276 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:13:28.276 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 73165 00:13:28.276 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73165 ']' 00:13:28.276 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.276 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:28.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.276 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.276 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:28.276 12:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:28.276 [2024-11-15 12:48:36.873022] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:13:28.276 [2024-11-15 12:48:36.873100] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.536 [2024-11-15 12:48:37.018878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.536 [2024-11-15 12:48:37.045773] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.536 [2024-11-15 12:48:37.045834] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.536 [2024-11-15 12:48:37.045844] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.536 [2024-11-15 12:48:37.045851] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.536 [2024-11-15 12:48:37.045857] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:28.536 [2024-11-15 12:48:37.046164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.536 [2024-11-15 12:48:37.072185] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=73188 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=ee3d3753-5156-480b-a99b-a53f93e942a2 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=f582e314-426d-4262-ba15-3f3f41fe0e38 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=7b499447-8dbd-4128-80c0-83649acdcbee 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.536 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:28.536 null0 00:13:28.536 null1 00:13:28.796 null2 00:13:28.796 [2024-11-15 12:48:37.208523] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:28.796 [2024-11-15 12:48:37.224186] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:13:28.796 [2024-11-15 12:48:37.224278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73188 ] 00:13:28.796 [2024-11-15 12:48:37.232566] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:28.796 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.796 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 73188 /var/tmp/tgt2.sock 00:13:28.796 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73188 ']' 00:13:28.796 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:13:28.796 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:28.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:13:28.796 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
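Each of the three namespaces created next is given one of the UUIDs generated above, and the test later verifies that the NGUID reported by the connected host matches it. A sketch of that check, reusing the helper names and nvme/jq calls seen in the trace (the real uuid2nguid and nvme_get_nguid in nvmf/common.sh and nsid.sh may differ in detail):

    # Sketch: a namespace's NGUID should be its UUID with the dashes removed (case-insensitive).
    uuid2nguid() {
        echo "${1//-/}" | tr '[:lower:]' '[:upper:]'
    }

    nvme_get_nguid() {
        local ctrlr=$1 nsid=$2
        nvme id-ns "/dev/${ctrlr}n${nsid}" -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]'
    }

    # Usage mirroring the nsid.sh checks further down in this trace:
    #   [[ "$(uuid2nguid "$ns1uuid")" == "$(nvme_get_nguid nvme0 1)" ]] || exit 1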
00:13:28.796 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:28.796 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:28.796 [2024-11-15 12:48:37.377698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.796 [2024-11-15 12:48:37.415708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.796 [2024-11-15 12:48:37.459508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:29.056 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.056 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:13:29.056 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:13:29.315 [2024-11-15 12:48:37.933828] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:29.315 [2024-11-15 12:48:37.949916] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:13:29.316 nvme0n1 nvme0n2 00:13:29.316 nvme1n1 00:13:29.574 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:13:29.574 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:13:29.574 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:13:29.574 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:13:29.574 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:13:29.574 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:13:29.574 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:13:29.574 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:13:29.574 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:13:29.574 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:13:29.574 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:13:29.574 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:29.574 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:13:29.574 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:13:29.574 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:13:29.574 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:13:30.512 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:30.512 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:13:30.512 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:30.512 12:48:39 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:13:30.512 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:13:30.512 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid ee3d3753-5156-480b-a99b-a53f93e942a2 00:13:30.512 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:13:30.512 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:13:30.512 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ee3d37535156480ba99ba53f93e942a2 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo EE3D37535156480BA99BA53F93E942A2 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ EE3D37535156480BA99BA53F93E942A2 == \E\E\3\D\3\7\5\3\5\1\5\6\4\8\0\B\A\9\9\B\A\5\3\F\9\3\E\9\4\2\A\2 ]] 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid f582e314-426d-4262-ba15-3f3f41fe0e38 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=f582e314426d4262ba153f3f41fe0e38 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo F582E314426D4262BA153F3F41FE0E38 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ F582E314426D4262BA153F3F41FE0E38 == \F\5\8\2\E\3\1\4\4\2\6\D\4\2\6\2\B\A\1\5\3\F\3\F\4\1\F\E\0\E\3\8 ]] 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:30.772 12:48:39 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 7b499447-8dbd-4128-80c0-83649acdcbee 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=7b4994478dbd412880c083649acdcbee 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 7B4994478DBD412880C083649ACDCBEE 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 7B4994478DBD412880C083649ACDCBEE == \7\B\4\9\9\4\4\7\8\D\B\D\4\1\2\8\8\0\C\0\8\3\6\4\9\A\C\D\C\B\E\E ]] 00:13:30.772 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:13:31.075 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:13:31.075 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:13:31.075 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 73188 00:13:31.075 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73188 ']' 00:13:31.075 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73188 00:13:31.075 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:13:31.075 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:31.075 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73188 00:13:31.075 killing process with pid 73188 00:13:31.075 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:31.075 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:31.075 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73188' 00:13:31.075 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73188 00:13:31.075 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73188 00:13:31.347 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:13:31.347 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:31.347 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:13:31.347 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:13:31.347 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:13:31.347 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:31.347 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:31.347 rmmod nvme_tcp 00:13:31.347 rmmod nvme_fabrics 00:13:31.347 rmmod nvme_keyring 00:13:31.347 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:31.347 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:13:31.347 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:13:31.347 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 73165 ']' 00:13:31.347 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 73165 00:13:31.347 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73165 ']' 00:13:31.347 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73165 00:13:31.347 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:13:31.347 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:31.347 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73165 00:13:31.347 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:31.347 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:31.347 killing process with pid 73165 00:13:31.347 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73165' 00:13:31.347 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73165 00:13:31.347 12:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73165 00:13:31.616 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:31.616 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:31.616 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:31.616 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:13:31.616 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:31.616 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:13:31.616 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:13:31.616 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:31.616 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:31.616 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:31.616 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:31.616 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:31.616 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:13:31.616 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:31.616 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:31.616 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:31.616 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:31.616 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:31.616 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:31.616 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:31.616 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:31.616 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:31.875 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:31.875 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.875 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:31.875 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.875 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:13:31.875 00:13:31.875 real 0m4.197s 00:13:31.875 user 0m6.227s 00:13:31.875 sys 0m1.488s 00:13:31.875 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:31.875 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:31.875 ************************************ 00:13:31.875 END TEST nvmf_nsid 00:13:31.875 ************************************ 00:13:31.875 12:48:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:31.875 00:13:31.875 real 4m44.243s 00:13:31.875 user 9m54.919s 00:13:31.875 sys 1m2.521s 00:13:31.875 12:48:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:31.875 ************************************ 00:13:31.875 END TEST nvmf_target_extra 00:13:31.875 12:48:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:31.875 ************************************ 00:13:31.875 12:48:40 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:13:31.875 12:48:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:31.875 12:48:40 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:31.875 12:48:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:31.875 ************************************ 00:13:31.875 START TEST nvmf_host 00:13:31.875 ************************************ 00:13:31.875 12:48:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:13:31.875 * Looking for test storage... 
00:13:31.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:13:31.875 12:48:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:31.875 12:48:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:13:31.875 12:48:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:32.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.135 --rc genhtml_branch_coverage=1 00:13:32.135 --rc genhtml_function_coverage=1 00:13:32.135 --rc genhtml_legend=1 00:13:32.135 --rc geninfo_all_blocks=1 00:13:32.135 --rc geninfo_unexecuted_blocks=1 00:13:32.135 00:13:32.135 ' 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:32.135 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:32.135 --rc genhtml_branch_coverage=1 00:13:32.135 --rc genhtml_function_coverage=1 00:13:32.135 --rc genhtml_legend=1 00:13:32.135 --rc geninfo_all_blocks=1 00:13:32.135 --rc geninfo_unexecuted_blocks=1 00:13:32.135 00:13:32.135 ' 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:32.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.135 --rc genhtml_branch_coverage=1 00:13:32.135 --rc genhtml_function_coverage=1 00:13:32.135 --rc genhtml_legend=1 00:13:32.135 --rc geninfo_all_blocks=1 00:13:32.135 --rc geninfo_unexecuted_blocks=1 00:13:32.135 00:13:32.135 ' 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:32.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.135 --rc genhtml_branch_coverage=1 00:13:32.135 --rc genhtml_function_coverage=1 00:13:32.135 --rc genhtml_legend=1 00:13:32.135 --rc geninfo_all_blocks=1 00:13:32.135 --rc geninfo_unexecuted_blocks=1 00:13:32.135 00:13:32.135 ' 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:32.135 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:32.135 12:48:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:32.136 12:48:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:32.136 12:48:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:32.136 12:48:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:13:32.136 12:48:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:13:32.136 12:48:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:13:32.136 
12:48:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:32.136 12:48:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:32.136 12:48:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:13:32.136 ************************************ 00:13:32.136 START TEST nvmf_identify 00:13:32.136 ************************************ 00:13:32.136 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:13:32.136 * Looking for test storage... 00:13:32.136 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:32.136 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:32.136 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:13:32.136 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:32.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.396 --rc genhtml_branch_coverage=1 00:13:32.396 --rc genhtml_function_coverage=1 00:13:32.396 --rc genhtml_legend=1 00:13:32.396 --rc geninfo_all_blocks=1 00:13:32.396 --rc geninfo_unexecuted_blocks=1 00:13:32.396 00:13:32.396 ' 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:32.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.396 --rc genhtml_branch_coverage=1 00:13:32.396 --rc genhtml_function_coverage=1 00:13:32.396 --rc genhtml_legend=1 00:13:32.396 --rc geninfo_all_blocks=1 00:13:32.396 --rc geninfo_unexecuted_blocks=1 00:13:32.396 00:13:32.396 ' 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:32.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.396 --rc genhtml_branch_coverage=1 00:13:32.396 --rc genhtml_function_coverage=1 00:13:32.396 --rc genhtml_legend=1 00:13:32.396 --rc geninfo_all_blocks=1 00:13:32.396 --rc geninfo_unexecuted_blocks=1 00:13:32.396 00:13:32.396 ' 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:32.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.396 --rc genhtml_branch_coverage=1 00:13:32.396 --rc genhtml_function_coverage=1 00:13:32.396 --rc genhtml_legend=1 00:13:32.396 --rc geninfo_all_blocks=1 00:13:32.396 --rc geninfo_unexecuted_blocks=1 00:13:32.396 00:13:32.396 ' 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.396 
12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:32.396 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:32.396 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.397 12:48:40 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:32.397 Cannot find device "nvmf_init_br" 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:32.397 Cannot find device "nvmf_init_br2" 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:32.397 Cannot find device "nvmf_tgt_br" 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:13:32.397 Cannot find device "nvmf_tgt_br2" 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:32.397 Cannot find device "nvmf_init_br" 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:32.397 Cannot find device "nvmf_init_br2" 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:32.397 Cannot find device "nvmf_tgt_br" 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:13:32.397 12:48:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:32.397 Cannot find device "nvmf_tgt_br2" 00:13:32.397 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:13:32.397 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:32.397 Cannot find device "nvmf_br" 00:13:32.397 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:13:32.397 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:32.397 Cannot find device "nvmf_init_if" 00:13:32.397 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:13:32.397 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:32.397 Cannot find device "nvmf_init_if2" 00:13:32.397 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:13:32.397 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:32.397 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:32.397 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:13:32.397 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:32.397 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:32.397 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:13:32.397 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:32.397 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:32.397 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:32.657 
12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:32.657 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:32.657 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:13:32.657 00:13:32.657 --- 10.0.0.3 ping statistics --- 00:13:32.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.657 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:32.657 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:32.657 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:13:32.657 00:13:32.657 --- 10.0.0.4 ping statistics --- 00:13:32.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.657 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:32.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:32.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:13:32.657 00:13:32.657 --- 10.0.0.1 ping statistics --- 00:13:32.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.657 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:32.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:32.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:13:32.657 00:13:32.657 --- 10.0.0.2 ping statistics --- 00:13:32.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.657 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=73536 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 73536 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 73536 ']' 00:13:32.657 
12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:32.657 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:32.917 [2024-11-15 12:48:41.359271] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:13:32.917 [2024-11-15 12:48:41.359357] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.917 [2024-11-15 12:48:41.509727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:32.917 [2024-11-15 12:48:41.550900] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:32.917 [2024-11-15 12:48:41.550962] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:32.917 [2024-11-15 12:48:41.550976] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:32.917 [2024-11-15 12:48:41.550986] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:32.917 [2024-11-15 12:48:41.550995] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
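Note on the network setup and target launch traced above: the nvmf_veth_init section is the test's isolated network bring-up, and the "Cannot find device" / "No such file or directory" lines are only best-effort cleanup of a previous run. The script then creates the nvmf_tgt_ns_spdk namespace, four veth pairs (initiator ends nvmf_init_if/nvmf_init_if2 stay on the host with 10.0.0.1 and 10.0.0.2, target ends nvmf_tgt_if/nvmf_tgt_if2 move into the namespace with 10.0.0.3 and 10.0.0.4), enslaves every peer interface to the nvmf_br bridge, opens TCP port 4420 in iptables, confirms reachability with the four pings, and finally starts nvmf_tgt inside the namespace and waits for its RPC socket. A minimal sketch of the same topology with a single pair per side, assuming the SPDK checkout path used in this run; the socket poll is a stand-in for the test's waitforlisten helper, not the helper itself:

  # One initiator veth pair on the host, one target pair inside a namespace,
  # both bridged so NVMe/TCP traffic crosses a real (virtual) network path.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3   # host (initiator side) -> namespace (target side)
  # Launch the target in the namespace and wait for its RPC socket.
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done

Keeping the target behind a namespace and a bridge means the NVMe/TCP traffic traverses an actual network stack even though initiator and target share one VM, which is the point of the NET_TYPE=virt setup seen earlier in this run.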
00:13:32.917 [2024-11-15 12:48:41.555643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.917 [2024-11-15 12:48:41.555841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:32.917 [2024-11-15 12:48:41.555956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:32.917 [2024-11-15 12:48:41.555966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.177 [2024-11-15 12:48:41.591434] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:33.177 [2024-11-15 12:48:41.651555] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:33.177 Malloc0 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:33.177 [2024-11-15 12:48:41.756276] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:33.177 [ 00:13:33.177 { 00:13:33.177 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:33.177 "subtype": "Discovery", 00:13:33.177 "listen_addresses": [ 00:13:33.177 { 00:13:33.177 "trtype": "TCP", 00:13:33.177 "adrfam": "IPv4", 00:13:33.177 "traddr": "10.0.0.3", 00:13:33.177 "trsvcid": "4420" 00:13:33.177 } 00:13:33.177 ], 00:13:33.177 "allow_any_host": true, 00:13:33.177 "hosts": [] 00:13:33.177 }, 00:13:33.177 { 00:13:33.177 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:33.177 "subtype": "NVMe", 00:13:33.177 "listen_addresses": [ 00:13:33.177 { 00:13:33.177 "trtype": "TCP", 00:13:33.177 "adrfam": "IPv4", 00:13:33.177 "traddr": "10.0.0.3", 00:13:33.177 "trsvcid": "4420" 00:13:33.177 } 00:13:33.177 ], 00:13:33.177 "allow_any_host": true, 00:13:33.177 "hosts": [], 00:13:33.177 "serial_number": "SPDK00000000000001", 00:13:33.177 "model_number": "SPDK bdev Controller", 00:13:33.177 "max_namespaces": 32, 00:13:33.177 "min_cntlid": 1, 00:13:33.177 "max_cntlid": 65519, 00:13:33.177 "namespaces": [ 00:13:33.177 { 00:13:33.177 "nsid": 1, 00:13:33.177 "bdev_name": "Malloc0", 00:13:33.177 "name": "Malloc0", 00:13:33.177 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:13:33.177 "eui64": "ABCDEF0123456789", 00:13:33.177 "uuid": "0fd96e6e-bb82-4281-b49f-f41f69883985" 00:13:33.177 } 00:13:33.177 ] 00:13:33.177 } 00:13:33.177 ] 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.177 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:13:33.177 [2024-11-15 12:48:41.803110] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:13:33.177 [2024-11-15 12:48:41.803168] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73562 ] 00:13:33.439 [2024-11-15 12:48:41.952392] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:13:33.439 [2024-11-15 12:48:41.952457] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:13:33.439 [2024-11-15 12:48:41.952464] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:13:33.439 [2024-11-15 12:48:41.952474] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:13:33.439 [2024-11-15 12:48:41.952482] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:13:33.439 [2024-11-15 12:48:41.952795] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:13:33.439 [2024-11-15 12:48:41.952862] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x123c750 0 00:13:33.439 [2024-11-15 12:48:41.958692] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:13:33.439 [2024-11-15 12:48:41.958722] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:13:33.439 [2024-11-15 12:48:41.958728] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:13:33.439 [2024-11-15 12:48:41.958731] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:13:33.439 [2024-11-15 12:48:41.958761] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.439 [2024-11-15 12:48:41.958768] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.439 [2024-11-15 12:48:41.958772] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x123c750) 00:13:33.439 [2024-11-15 12:48:41.958785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:33.439 [2024-11-15 12:48:41.958815] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0740, cid 0, qid 0 00:13:33.439 [2024-11-15 12:48:41.966694] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.439 [2024-11-15 12:48:41.966714] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.439 [2024-11-15 12:48:41.966735] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.439 [2024-11-15 12:48:41.966740] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0740) on tqpair=0x123c750 00:13:33.439 [2024-11-15 12:48:41.966751] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:13:33.439 [2024-11-15 12:48:41.966759] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:13:33.439 [2024-11-15 12:48:41.966765] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:13:33.439 [2024-11-15 12:48:41.966780] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.439 [2024-11-15 12:48:41.966786] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:13:33.439 [2024-11-15 12:48:41.966790] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x123c750) 00:13:33.439 [2024-11-15 12:48:41.966799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.439 [2024-11-15 12:48:41.966826] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0740, cid 0, qid 0 00:13:33.439 [2024-11-15 12:48:41.966884] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.439 [2024-11-15 12:48:41.966892] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.439 [2024-11-15 12:48:41.966895] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.439 [2024-11-15 12:48:41.966899] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0740) on tqpair=0x123c750 00:13:33.439 [2024-11-15 12:48:41.966905] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:13:33.439 [2024-11-15 12:48:41.966913] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:13:33.439 [2024-11-15 12:48:41.966920] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.439 [2024-11-15 12:48:41.966925] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.439 [2024-11-15 12:48:41.966944] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x123c750) 00:13:33.439 [2024-11-15 12:48:41.966951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.439 [2024-11-15 12:48:41.966970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0740, cid 0, qid 0 00:13:33.439 [2024-11-15 12:48:41.967017] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.439 [2024-11-15 12:48:41.967024] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.439 [2024-11-15 12:48:41.967027] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.439 [2024-11-15 12:48:41.967031] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0740) on tqpair=0x123c750 00:13:33.439 [2024-11-15 12:48:41.967037] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:13:33.439 [2024-11-15 12:48:41.967045] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:13:33.440 [2024-11-15 12:48:41.967052] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.440 [2024-11-15 12:48:41.967057] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.440 [2024-11-15 12:48:41.967060] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x123c750) 00:13:33.440 [2024-11-15 12:48:41.967067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.440 [2024-11-15 12:48:41.967084] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0740, cid 0, qid 0 00:13:33.440 [2024-11-15 12:48:41.967128] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.440 [2024-11-15 12:48:41.967134] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.440 [2024-11-15 12:48:41.967138] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.440 [2024-11-15 12:48:41.967142] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0740) on tqpair=0x123c750 00:13:33.440 [2024-11-15 12:48:41.967148] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:33.440 [2024-11-15 12:48:41.967158] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.440 [2024-11-15 12:48:41.967162] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.440 [2024-11-15 12:48:41.967173] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x123c750) 00:13:33.440 [2024-11-15 12:48:41.967180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.440 [2024-11-15 12:48:41.967196] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0740, cid 0, qid 0 00:13:33.440 [2024-11-15 12:48:41.967245] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.440 [2024-11-15 12:48:41.967251] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.440 [2024-11-15 12:48:41.967255] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.440 [2024-11-15 12:48:41.967259] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0740) on tqpair=0x123c750 00:13:33.440 [2024-11-15 12:48:41.967264] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:13:33.440 [2024-11-15 12:48:41.967270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:13:33.440 [2024-11-15 12:48:41.967277] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:33.440 [2024-11-15 12:48:41.967388] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:13:33.440 [2024-11-15 12:48:41.967394] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:33.440 [2024-11-15 12:48:41.967403] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.440 [2024-11-15 12:48:41.967408] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.440 [2024-11-15 12:48:41.967411] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x123c750) 00:13:33.440 [2024-11-15 12:48:41.967419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.440 [2024-11-15 12:48:41.967437] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0740, cid 0, qid 0 00:13:33.440 [2024-11-15 12:48:41.967481] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.440 [2024-11-15 12:48:41.967487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.440 [2024-11-15 12:48:41.967491] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:13:33.440 [2024-11-15 12:48:41.967495] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0740) on tqpair=0x123c750 00:13:33.440 [2024-11-15 12:48:41.967500] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:33.440 [2024-11-15 12:48:41.967510] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.440 [2024-11-15 12:48:41.967515] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.440 [2024-11-15 12:48:41.967518] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x123c750) 00:13:33.440 [2024-11-15 12:48:41.967525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.440 [2024-11-15 12:48:41.967542] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0740, cid 0, qid 0 00:13:33.440 [2024-11-15 12:48:41.967582] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.440 [2024-11-15 12:48:41.967589] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.440 [2024-11-15 12:48:41.967592] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.440 [2024-11-15 12:48:41.967596] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0740) on tqpair=0x123c750 00:13:33.440 [2024-11-15 12:48:41.967601] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:33.440 [2024-11-15 12:48:41.967606] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:13:33.440 [2024-11-15 12:48:41.967627] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:13:33.440 [2024-11-15 12:48:41.967644] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:13:33.440 [2024-11-15 12:48:41.967655] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.440 [2024-11-15 12:48:41.967659] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x123c750) 00:13:33.440 [2024-11-15 12:48:41.967668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.440 [2024-11-15 12:48:41.967688] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0740, cid 0, qid 0 00:13:33.440 [2024-11-15 12:48:41.967764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:33.440 [2024-11-15 12:48:41.967771] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:33.440 [2024-11-15 12:48:41.967775] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:33.440 [2024-11-15 12:48:41.967779] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x123c750): datao=0, datal=4096, cccid=0 00:13:33.440 [2024-11-15 12:48:41.967784] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a0740) on tqpair(0x123c750): expected_datao=0, payload_size=4096 00:13:33.440 [2024-11-15 12:48:41.967789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:13:33.440 [2024-11-15 12:48:41.967797] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:33.440 [2024-11-15 12:48:41.967801] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:33.440 [2024-11-15 12:48:41.967810] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.440 [2024-11-15 12:48:41.967816] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.440 [2024-11-15 12:48:41.967819] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.440 [2024-11-15 12:48:41.967823] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0740) on tqpair=0x123c750 00:13:33.440 [2024-11-15 12:48:41.967832] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:13:33.440 [2024-11-15 12:48:41.967837] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:13:33.440 [2024-11-15 12:48:41.967842] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:13:33.440 [2024-11-15 12:48:41.967847] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:13:33.440 [2024-11-15 12:48:41.967852] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:13:33.440 [2024-11-15 12:48:41.967857] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:13:33.440 [2024-11-15 12:48:41.967870] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:13:33.440 [2024-11-15 12:48:41.967878] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.440 [2024-11-15 12:48:41.967883] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.440 [2024-11-15 12:48:41.967887] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x123c750) 00:13:33.440 [2024-11-15 12:48:41.967894] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:33.440 [2024-11-15 12:48:41.967914] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0740, cid 0, qid 0 00:13:33.440 [2024-11-15 12:48:41.967972] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.440 [2024-11-15 12:48:41.967978] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.440 [2024-11-15 12:48:41.967982] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.440 [2024-11-15 12:48:41.967986] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0740) on tqpair=0x123c750 00:13:33.440 [2024-11-15 12:48:41.967994] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.440 [2024-11-15 12:48:41.967998] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.440 [2024-11-15 12:48:41.968002] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x123c750) 00:13:33.440 [2024-11-15 12:48:41.968008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.440 
[2024-11-15 12:48:41.968015] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.440 [2024-11-15 12:48:41.968018] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.440 [2024-11-15 12:48:41.968022] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x123c750) 00:13:33.440 [2024-11-15 12:48:41.968028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.440 [2024-11-15 12:48:41.968034] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.440 [2024-11-15 12:48:41.968038] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.440 [2024-11-15 12:48:41.968042] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x123c750) 00:13:33.440 [2024-11-15 12:48:41.968048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.440 [2024-11-15 12:48:41.968053] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.440 [2024-11-15 12:48:41.968057] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.440 [2024-11-15 12:48:41.968061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x123c750) 00:13:33.440 [2024-11-15 12:48:41.968067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.440 [2024-11-15 12:48:41.968072] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:33.440 [2024-11-15 12:48:41.968084] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:33.441 [2024-11-15 12:48:41.968092] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.441 [2024-11-15 12:48:41.968096] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x123c750) 00:13:33.441 [2024-11-15 12:48:41.968103] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.441 [2024-11-15 12:48:41.968123] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0740, cid 0, qid 0 00:13:33.441 [2024-11-15 12:48:41.968129] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a08c0, cid 1, qid 0 00:13:33.441 [2024-11-15 12:48:41.968134] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0a40, cid 2, qid 0 00:13:33.441 [2024-11-15 12:48:41.968139] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0bc0, cid 3, qid 0 00:13:33.441 [2024-11-15 12:48:41.968144] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0d40, cid 4, qid 0 00:13:33.441 [2024-11-15 12:48:41.968227] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.441 [2024-11-15 12:48:41.968234] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.441 [2024-11-15 12:48:41.968237] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.441 [2024-11-15 12:48:41.968241] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0d40) on tqpair=0x123c750 00:13:33.441 [2024-11-15 
12:48:41.968247] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:13:33.441 [2024-11-15 12:48:41.968252] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:13:33.441 [2024-11-15 12:48:41.968264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.441 [2024-11-15 12:48:41.968268] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x123c750) 00:13:33.441 [2024-11-15 12:48:41.968275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.441 [2024-11-15 12:48:41.968293] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0d40, cid 4, qid 0 00:13:33.441 [2024-11-15 12:48:41.968347] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:33.441 [2024-11-15 12:48:41.968353] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:33.441 [2024-11-15 12:48:41.968357] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:33.441 [2024-11-15 12:48:41.968361] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x123c750): datao=0, datal=4096, cccid=4 00:13:33.441 [2024-11-15 12:48:41.968366] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a0d40) on tqpair(0x123c750): expected_datao=0, payload_size=4096 00:13:33.441 [2024-11-15 12:48:41.968371] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.441 [2024-11-15 12:48:41.968378] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:33.441 [2024-11-15 12:48:41.968382] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:33.441 [2024-11-15 12:48:41.968390] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.441 [2024-11-15 12:48:41.968396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.441 [2024-11-15 12:48:41.968399] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.441 [2024-11-15 12:48:41.968404] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0d40) on tqpair=0x123c750 00:13:33.441 [2024-11-15 12:48:41.968418] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:13:33.441 [2024-11-15 12:48:41.968447] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.441 [2024-11-15 12:48:41.968453] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x123c750) 00:13:33.441 [2024-11-15 12:48:41.968460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.441 [2024-11-15 12:48:41.968468] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.441 [2024-11-15 12:48:41.968472] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.441 [2024-11-15 12:48:41.968476] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x123c750) 00:13:33.441 [2024-11-15 12:48:41.968482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.441 [2024-11-15 12:48:41.968505] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0d40, cid 4, qid 0 00:13:33.441 [2024-11-15 12:48:41.968512] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0ec0, cid 5, qid 0 00:13:33.441 [2024-11-15 12:48:41.968617] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:33.441 [2024-11-15 12:48:41.968625] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:33.441 [2024-11-15 12:48:41.968630] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:33.441 [2024-11-15 12:48:41.968633] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x123c750): datao=0, datal=1024, cccid=4 00:13:33.441 [2024-11-15 12:48:41.968638] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a0d40) on tqpair(0x123c750): expected_datao=0, payload_size=1024 00:13:33.441 [2024-11-15 12:48:41.968643] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.441 [2024-11-15 12:48:41.968649] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:33.441 [2024-11-15 12:48:41.968654] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:33.441 [2024-11-15 12:48:41.968659] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.441 [2024-11-15 12:48:41.968665] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.441 [2024-11-15 12:48:41.968669] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.441 [2024-11-15 12:48:41.968673] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0ec0) on tqpair=0x123c750 00:13:33.441 [2024-11-15 12:48:41.968692] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.441 [2024-11-15 12:48:41.968699] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.441 [2024-11-15 12:48:41.968703] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.441 [2024-11-15 12:48:41.968707] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0d40) on tqpair=0x123c750 00:13:33.441 [2024-11-15 12:48:41.968737] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.441 [2024-11-15 12:48:41.968747] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x123c750) 00:13:33.441 [2024-11-15 12:48:41.968756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.441 [2024-11-15 12:48:41.968786] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0d40, cid 4, qid 0 00:13:33.441 [2024-11-15 12:48:41.968866] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:33.441 [2024-11-15 12:48:41.968873] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:33.441 [2024-11-15 12:48:41.968877] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:33.441 [2024-11-15 12:48:41.968881] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x123c750): datao=0, datal=3072, cccid=4 00:13:33.441 [2024-11-15 12:48:41.968885] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a0d40) on tqpair(0x123c750): expected_datao=0, payload_size=3072 00:13:33.441 [2024-11-15 12:48:41.968890] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.441 [2024-11-15 12:48:41.968897] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:13:33.441 [2024-11-15 12:48:41.968901] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:33.441 [2024-11-15 12:48:41.968909] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.441 [2024-11-15 12:48:41.968915] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.441 [2024-11-15 12:48:41.968918] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.441 [2024-11-15 12:48:41.968923] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0d40) on tqpair=0x123c750 00:13:33.441 [2024-11-15 12:48:41.968933] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.441 [2024-11-15 12:48:41.968938] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x123c750) 00:13:33.441 [2024-11-15 12:48:41.968945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.441 [2024-11-15 12:48:41.968968] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0d40, cid 4, qid 0 00:13:33.441 [2024-11-15 12:48:41.969032] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:33.441 [2024-11-15 12:48:41.969038] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:33.441 [2024-11-15 12:48:41.969042] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:33.441 [2024-11-15 12:48:41.969046] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x123c750): datao=0, datal=8, cccid=4 00:13:33.441 [2024-11-15 12:48:41.969050] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a0d40) on tqpair(0x123c750): expected_datao=0, payload_size=8 00:13:33.441 [2024-11-15 12:48:41.969055] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.441 [2024-11-15 12:48:41.969062] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:33.441 [2024-11-15 12:48:41.969065] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:33.441 [2024-11-15 12:48:41.969080] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.441 [2024-11-15 12:48:41.969087] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.441 [2024-11-15 12:48:41.969090] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.441 [2024-11-15 12:48:41.969094] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0d40) on tqpair=0x123c750 00:13:33.441 ===================================================== 00:13:33.441 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:13:33.441 ===================================================== 00:13:33.441 Controller Capabilities/Features 00:13:33.441 ================================ 00:13:33.441 Vendor ID: 0000 00:13:33.441 Subsystem Vendor ID: 0000 00:13:33.441 Serial Number: .................... 00:13:33.441 Model Number: ........................................ 
00:13:33.441 Firmware Version: 25.01
00:13:33.441 Recommended Arb Burst: 0
00:13:33.441 IEEE OUI Identifier: 00 00 00
00:13:33.441 Multi-path I/O
00:13:33.441 May have multiple subsystem ports: No
00:13:33.441 May have multiple controllers: No
00:13:33.441 Associated with SR-IOV VF: No
00:13:33.441 Max Data Transfer Size: 131072
00:13:33.441 Max Number of Namespaces: 0
00:13:33.441 Max Number of I/O Queues: 1024
00:13:33.441 NVMe Specification Version (VS): 1.3
00:13:33.441 NVMe Specification Version (Identify): 1.3
00:13:33.441 Maximum Queue Entries: 128
00:13:33.441 Contiguous Queues Required: Yes
00:13:33.441 Arbitration Mechanisms Supported
00:13:33.441 Weighted Round Robin: Not Supported
00:13:33.441 Vendor Specific: Not Supported
00:13:33.441 Reset Timeout: 15000 ms
00:13:33.441 Doorbell Stride: 4 bytes
00:13:33.441 NVM Subsystem Reset: Not Supported
00:13:33.442 Command Sets Supported
00:13:33.442 NVM Command Set: Supported
00:13:33.442 Boot Partition: Not Supported
00:13:33.442 Memory Page Size Minimum: 4096 bytes
00:13:33.442 Memory Page Size Maximum: 4096 bytes
00:13:33.442 Persistent Memory Region: Not Supported
00:13:33.442 Optional Asynchronous Events Supported
00:13:33.442 Namespace Attribute Notices: Not Supported
00:13:33.442 Firmware Activation Notices: Not Supported
00:13:33.442 ANA Change Notices: Not Supported
00:13:33.442 PLE Aggregate Log Change Notices: Not Supported
00:13:33.442 LBA Status Info Alert Notices: Not Supported
00:13:33.442 EGE Aggregate Log Change Notices: Not Supported
00:13:33.442 Normal NVM Subsystem Shutdown event: Not Supported
00:13:33.442 Zone Descriptor Change Notices: Not Supported
00:13:33.442 Discovery Log Change Notices: Supported
00:13:33.442 Controller Attributes
00:13:33.442 128-bit Host Identifier: Not Supported
00:13:33.442 Non-Operational Permissive Mode: Not Supported
00:13:33.442 NVM Sets: Not Supported
00:13:33.442 Read Recovery Levels: Not Supported
00:13:33.442 Endurance Groups: Not Supported
00:13:33.442 Predictable Latency Mode: Not Supported
00:13:33.442 Traffic Based Keep ALive: Not Supported
00:13:33.442 Namespace Granularity: Not Supported
00:13:33.442 SQ Associations: Not Supported
00:13:33.442 UUID List: Not Supported
00:13:33.442 Multi-Domain Subsystem: Not Supported
00:13:33.442 Fixed Capacity Management: Not Supported
00:13:33.442 Variable Capacity Management: Not Supported
00:13:33.442 Delete Endurance Group: Not Supported
00:13:33.442 Delete NVM Set: Not Supported
00:13:33.442 Extended LBA Formats Supported: Not Supported
00:13:33.442 Flexible Data Placement Supported: Not Supported
00:13:33.442 
00:13:33.442 Controller Memory Buffer Support
00:13:33.442 ================================
00:13:33.442 Supported: No
00:13:33.442 
00:13:33.442 Persistent Memory Region Support
00:13:33.442 ================================
00:13:33.442 Supported: No
00:13:33.442 
00:13:33.442 Admin Command Set Attributes
00:13:33.442 ============================
00:13:33.442 Security Send/Receive: Not Supported
00:13:33.442 Format NVM: Not Supported
00:13:33.442 Firmware Activate/Download: Not Supported
00:13:33.442 Namespace Management: Not Supported
00:13:33.442 Device Self-Test: Not Supported
00:13:33.442 Directives: Not Supported
00:13:33.442 NVMe-MI: Not Supported
00:13:33.442 Virtualization Management: Not Supported
00:13:33.442 Doorbell Buffer Config: Not Supported
00:13:33.442 Get LBA Status Capability: Not Supported
00:13:33.442 Command & Feature Lockdown Capability: Not Supported
00:13:33.442 Abort Command Limit: 1
00:13:33.442 Async Event Request Limit: 4
00:13:33.442 Number of Firmware Slots: N/A
00:13:33.442 Firmware Slot 1 Read-Only: N/A
00:13:33.442 Firmware Activation Without Reset: N/A
00:13:33.442 Multiple Update Detection Support: N/A
00:13:33.442 Firmware Update Granularity: No Information Provided
00:13:33.442 Per-Namespace SMART Log: No
00:13:33.442 Asymmetric Namespace Access Log Page: Not Supported
00:13:33.442 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:13:33.442 Command Effects Log Page: Not Supported
00:13:33.442 Get Log Page Extended Data: Supported
00:13:33.442 Telemetry Log Pages: Not Supported
00:13:33.442 Persistent Event Log Pages: Not Supported
00:13:33.442 Supported Log Pages Log Page: May Support
00:13:33.442 Commands Supported & Effects Log Page: Not Supported
00:13:33.442 Feature Identifiers & Effects Log Page:May Support
00:13:33.442 NVMe-MI Commands & Effects Log Page: May Support
00:13:33.442 Data Area 4 for Telemetry Log: Not Supported
00:13:33.442 Error Log Page Entries Supported: 128
00:13:33.442 Keep Alive: Not Supported
00:13:33.442 
00:13:33.442 NVM Command Set Attributes
00:13:33.442 ==========================
00:13:33.442 Submission Queue Entry Size
00:13:33.442 Max: 1
00:13:33.442 Min: 1
00:13:33.442 Completion Queue Entry Size
00:13:33.442 Max: 1
00:13:33.442 Min: 1
00:13:33.442 Number of Namespaces: 0
00:13:33.442 Compare Command: Not Supported
00:13:33.442 Write Uncorrectable Command: Not Supported
00:13:33.442 Dataset Management Command: Not Supported
00:13:33.442 Write Zeroes Command: Not Supported
00:13:33.442 Set Features Save Field: Not Supported
00:13:33.442 Reservations: Not Supported
00:13:33.442 Timestamp: Not Supported
00:13:33.442 Copy: Not Supported
00:13:33.442 Volatile Write Cache: Not Present
00:13:33.442 Atomic Write Unit (Normal): 1
00:13:33.442 Atomic Write Unit (PFail): 1
00:13:33.442 Atomic Compare & Write Unit: 1
00:13:33.442 Fused Compare & Write: Supported
00:13:33.442 Scatter-Gather List
00:13:33.442 SGL Command Set: Supported
00:13:33.442 SGL Keyed: Supported
00:13:33.442 SGL Bit Bucket Descriptor: Not Supported
00:13:33.442 SGL Metadata Pointer: Not Supported
00:13:33.442 Oversized SGL: Not Supported
00:13:33.442 SGL Metadata Address: Not Supported
00:13:33.442 SGL Offset: Supported
00:13:33.442 Transport SGL Data Block: Not Supported
00:13:33.442 Replay Protected Memory Block: Not Supported
00:13:33.442 
00:13:33.442 Firmware Slot Information
00:13:33.442 =========================
00:13:33.442 Active slot: 0
00:13:33.442 
00:13:33.442 
00:13:33.442 Error Log
00:13:33.442 =========
00:13:33.442 
00:13:33.442 Active Namespaces
00:13:33.442 =================
00:13:33.442 Discovery Log Page
00:13:33.442 ==================
00:13:33.442 Generation Counter: 2
00:13:33.442 Number of Records: 2
00:13:33.442 Record Format: 0
00:13:33.442 
00:13:33.442 Discovery Log Entry 0
00:13:33.442 ----------------------
00:13:33.442 Transport Type: 3 (TCP)
00:13:33.442 Address Family: 1 (IPv4)
00:13:33.442 Subsystem Type: 3 (Current Discovery Subsystem)
00:13:33.442 Entry Flags:
00:13:33.442 Duplicate Returned Information: 1
00:13:33.442 Explicit Persistent Connection Support for Discovery: 1
00:13:33.442 Transport Requirements:
00:13:33.442 Secure Channel: Not Required
00:13:33.442 Port ID: 0 (0x0000)
00:13:33.442 Controller ID: 65535 (0xffff)
00:13:33.442 Admin Max SQ Size: 128
00:13:33.442 Transport Service Identifier: 4420
00:13:33.442 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:13:33.442 Transport Address: 10.0.0.3
00:13:33.442 
Discovery Log Entry 1 00:13:33.442 ---------------------- 00:13:33.442 Transport Type: 3 (TCP) 00:13:33.442 Address Family: 1 (IPv4) 00:13:33.442 Subsystem Type: 2 (NVM Subsystem) 00:13:33.442 Entry Flags: 00:13:33.442 Duplicate Returned Information: 0 00:13:33.442 Explicit Persistent Connection Support for Discovery: 0 00:13:33.442 Transport Requirements: 00:13:33.442 Secure Channel: Not Required 00:13:33.442 Port ID: 0 (0x0000) 00:13:33.442 Controller ID: 65535 (0xffff) 00:13:33.442 Admin Max SQ Size: 128 00:13:33.442 Transport Service Identifier: 4420 00:13:33.442 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:13:33.442 Transport Address: 10.0.0.3 [2024-11-15 12:48:41.969183] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:13:33.442 [2024-11-15 12:48:41.969197] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0740) on tqpair=0x123c750 00:13:33.442 [2024-11-15 12:48:41.969204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.442 [2024-11-15 12:48:41.969210] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a08c0) on tqpair=0x123c750 00:13:33.442 [2024-11-15 12:48:41.969214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.442 [2024-11-15 12:48:41.969219] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0a40) on tqpair=0x123c750 00:13:33.442 [2024-11-15 12:48:41.969224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.442 [2024-11-15 12:48:41.969229] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0bc0) on tqpair=0x123c750 00:13:33.442 [2024-11-15 12:48:41.969234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.442 [2024-11-15 12:48:41.969243] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.442 [2024-11-15 12:48:41.969247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.442 [2024-11-15 12:48:41.969251] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x123c750) 00:13:33.442 [2024-11-15 12:48:41.969259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.442 [2024-11-15 12:48:41.969282] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0bc0, cid 3, qid 0 00:13:33.442 [2024-11-15 12:48:41.969325] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.442 [2024-11-15 12:48:41.969331] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.442 [2024-11-15 12:48:41.969335] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.442 [2024-11-15 12:48:41.969339] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0bc0) on tqpair=0x123c750 00:13:33.442 [2024-11-15 12:48:41.969347] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.969351] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.969355] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x123c750) 00:13:33.443 [2024-11-15 
12:48:41.969362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.443 [2024-11-15 12:48:41.969383] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0bc0, cid 3, qid 0 00:13:33.443 [2024-11-15 12:48:41.969448] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.443 [2024-11-15 12:48:41.969454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.443 [2024-11-15 12:48:41.969458] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.969462] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0bc0) on tqpair=0x123c750 00:13:33.443 [2024-11-15 12:48:41.969467] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:13:33.443 [2024-11-15 12:48:41.969471] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:13:33.443 [2024-11-15 12:48:41.969481] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.969486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.969489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x123c750) 00:13:33.443 [2024-11-15 12:48:41.969496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.443 [2024-11-15 12:48:41.969512] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0bc0, cid 3, qid 0 00:13:33.443 [2024-11-15 12:48:41.969561] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.443 [2024-11-15 12:48:41.969567] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.443 [2024-11-15 12:48:41.969571] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.969575] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0bc0) on tqpair=0x123c750 00:13:33.443 [2024-11-15 12:48:41.969585] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.969590] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.969594] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x123c750) 00:13:33.443 [2024-11-15 12:48:41.969615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.443 [2024-11-15 12:48:41.969636] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0bc0, cid 3, qid 0 00:13:33.443 [2024-11-15 12:48:41.969692] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.443 [2024-11-15 12:48:41.969700] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.443 [2024-11-15 12:48:41.969703] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.969707] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0bc0) on tqpair=0x123c750 00:13:33.443 [2024-11-15 12:48:41.969718] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.969723] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.969727] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x123c750) 00:13:33.443 [2024-11-15 12:48:41.969734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.443 [2024-11-15 12:48:41.969752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0bc0, cid 3, qid 0 00:13:33.443 [2024-11-15 12:48:41.969797] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.443 [2024-11-15 12:48:41.969804] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.443 [2024-11-15 12:48:41.969808] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.969812] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0bc0) on tqpair=0x123c750 00:13:33.443 [2024-11-15 12:48:41.969822] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.969826] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.969830] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x123c750) 00:13:33.443 [2024-11-15 12:48:41.969837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.443 [2024-11-15 12:48:41.969853] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0bc0, cid 3, qid 0 00:13:33.443 [2024-11-15 12:48:41.969895] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.443 [2024-11-15 12:48:41.969901] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.443 [2024-11-15 12:48:41.969905] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.969909] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0bc0) on tqpair=0x123c750 00:13:33.443 [2024-11-15 12:48:41.969919] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.969923] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.969927] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x123c750) 00:13:33.443 [2024-11-15 12:48:41.969934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.443 [2024-11-15 12:48:41.969951] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0bc0, cid 3, qid 0 00:13:33.443 [2024-11-15 12:48:41.969993] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.443 [2024-11-15 12:48:41.970000] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.443 [2024-11-15 12:48:41.970004] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.970008] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0bc0) on tqpair=0x123c750 00:13:33.443 [2024-11-15 12:48:41.970018] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.970022] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.970026] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x123c750) 00:13:33.443 [2024-11-15 12:48:41.970033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.443 [2024-11-15 12:48:41.970049] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0bc0, cid 3, qid 0 00:13:33.443 [2024-11-15 12:48:41.970089] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.443 [2024-11-15 12:48:41.970095] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.443 [2024-11-15 12:48:41.970099] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.970103] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0bc0) on tqpair=0x123c750 00:13:33.443 [2024-11-15 12:48:41.970113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.970117] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.970121] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x123c750) 00:13:33.443 [2024-11-15 12:48:41.970128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.443 [2024-11-15 12:48:41.970144] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0bc0, cid 3, qid 0 00:13:33.443 [2024-11-15 12:48:41.970189] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.443 [2024-11-15 12:48:41.970196] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.443 [2024-11-15 12:48:41.970199] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.970203] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0bc0) on tqpair=0x123c750 00:13:33.443 [2024-11-15 12:48:41.970213] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.970218] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.970222] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x123c750) 00:13:33.443 [2024-11-15 12:48:41.970229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.443 [2024-11-15 12:48:41.970245] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0bc0, cid 3, qid 0 00:13:33.443 [2024-11-15 12:48:41.970289] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.443 [2024-11-15 12:48:41.970301] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.443 [2024-11-15 12:48:41.970305] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.970310] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0bc0) on tqpair=0x123c750 00:13:33.443 [2024-11-15 12:48:41.970320] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.970325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.970329] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x123c750) 00:13:33.443 [2024-11-15 12:48:41.970336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.443 [2024-11-15 12:48:41.970353] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0bc0, cid 3, qid 0 00:13:33.443 
[2024-11-15 12:48:41.970400] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.443 [2024-11-15 12:48:41.970407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.443 [2024-11-15 12:48:41.970410] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.970414] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0bc0) on tqpair=0x123c750 00:13:33.443 [2024-11-15 12:48:41.970424] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.970429] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.970433] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x123c750) 00:13:33.443 [2024-11-15 12:48:41.970440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.443 [2024-11-15 12:48:41.970456] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0bc0, cid 3, qid 0 00:13:33.443 [2024-11-15 12:48:41.970500] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.443 [2024-11-15 12:48:41.970507] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.443 [2024-11-15 12:48:41.970510] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.970514] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0bc0) on tqpair=0x123c750 00:13:33.443 [2024-11-15 12:48:41.970524] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.970529] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.443 [2024-11-15 12:48:41.970533] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x123c750) 00:13:33.443 [2024-11-15 12:48:41.970540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.443 [2024-11-15 12:48:41.970555] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0bc0, cid 3, qid 0 00:13:33.443 [2024-11-15 12:48:41.972673] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.443 [2024-11-15 12:48:41.972700] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.444 [2024-11-15 12:48:41.972721] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.444 [2024-11-15 12:48:41.972726] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0bc0) on tqpair=0x123c750 00:13:33.444 [2024-11-15 12:48:41.972742] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.444 [2024-11-15 12:48:41.972748] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.444 [2024-11-15 12:48:41.972752] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x123c750) 00:13:33.444 [2024-11-15 12:48:41.972761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.444 [2024-11-15 12:48:41.972787] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a0bc0, cid 3, qid 0 00:13:33.444 [2024-11-15 12:48:41.972839] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.444 [2024-11-15 12:48:41.972846] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:13:33.444 [2024-11-15 12:48:41.972850] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.444 [2024-11-15 12:48:41.972854] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a0bc0) on tqpair=0x123c750 00:13:33.444 [2024-11-15 12:48:41.972863] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 3 milliseconds 00:13:33.444 00:13:33.444 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:13:33.444 [2024-11-15 12:48:42.010281] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:13:33.444 [2024-11-15 12:48:42.010336] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73571 ] 00:13:33.708 [2024-11-15 12:48:42.160139] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:13:33.708 [2024-11-15 12:48:42.160203] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:13:33.708 [2024-11-15 12:48:42.160209] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:13:33.708 [2024-11-15 12:48:42.160219] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:13:33.708 [2024-11-15 12:48:42.160226] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:13:33.708 [2024-11-15 12:48:42.160471] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:13:33.708 [2024-11-15 12:48:42.160537] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa76750 0 00:13:33.708 [2024-11-15 12:48:42.166738] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:13:33.708 [2024-11-15 12:48:42.166761] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:13:33.708 [2024-11-15 12:48:42.166767] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:13:33.708 [2024-11-15 12:48:42.166770] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:13:33.708 [2024-11-15 12:48:42.166795] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.708 [2024-11-15 12:48:42.166802] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.708 [2024-11-15 12:48:42.166806] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa76750) 00:13:33.708 [2024-11-15 12:48:42.166816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:33.708 [2024-11-15 12:48:42.166847] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xada740, cid 0, qid 0 00:13:33.708 [2024-11-15 12:48:42.174679] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.708 [2024-11-15 12:48:42.174700] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.708 [2024-11-15 12:48:42.174705] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.708 [2024-11-15 12:48:42.174710] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xada740) on tqpair=0xa76750 00:13:33.708 [2024-11-15 12:48:42.174721] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:13:33.708 [2024-11-15 12:48:42.174729] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:13:33.708 [2024-11-15 12:48:42.174735] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:13:33.708 [2024-11-15 12:48:42.174749] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.708 [2024-11-15 12:48:42.174754] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.708 [2024-11-15 12:48:42.174757] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa76750) 00:13:33.708 [2024-11-15 12:48:42.174766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.708 [2024-11-15 12:48:42.174793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xada740, cid 0, qid 0 00:13:33.708 [2024-11-15 12:48:42.174847] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.708 [2024-11-15 12:48:42.174870] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.708 [2024-11-15 12:48:42.174874] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.708 [2024-11-15 12:48:42.174878] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xada740) on tqpair=0xa76750 00:13:33.708 [2024-11-15 12:48:42.174884] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:13:33.708 [2024-11-15 12:48:42.174892] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:13:33.708 [2024-11-15 12:48:42.174899] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.708 [2024-11-15 12:48:42.174903] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.708 [2024-11-15 12:48:42.174907] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa76750) 00:13:33.708 [2024-11-15 12:48:42.174915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.708 [2024-11-15 12:48:42.174933] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xada740, cid 0, qid 0 00:13:33.708 [2024-11-15 12:48:42.174976] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.708 [2024-11-15 12:48:42.174983] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.708 [2024-11-15 12:48:42.174987] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.708 [2024-11-15 12:48:42.174991] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xada740) on tqpair=0xa76750 00:13:33.708 [2024-11-15 12:48:42.174996] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:13:33.708 [2024-11-15 12:48:42.175005] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:13:33.708 [2024-11-15 12:48:42.175012] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:13:33.708 [2024-11-15 12:48:42.175016] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.708 [2024-11-15 12:48:42.175020] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa76750) 00:13:33.708 [2024-11-15 12:48:42.175027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.708 [2024-11-15 12:48:42.175045] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xada740, cid 0, qid 0 00:13:33.709 [2024-11-15 12:48:42.175088] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.709 [2024-11-15 12:48:42.175095] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.709 [2024-11-15 12:48:42.175099] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.709 [2024-11-15 12:48:42.175103] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xada740) on tqpair=0xa76750 00:13:33.709 [2024-11-15 12:48:42.175108] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:33.709 [2024-11-15 12:48:42.175119] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.709 [2024-11-15 12:48:42.175123] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.709 [2024-11-15 12:48:42.175127] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa76750) 00:13:33.709 [2024-11-15 12:48:42.175134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.709 [2024-11-15 12:48:42.175152] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xada740, cid 0, qid 0 00:13:33.709 [2024-11-15 12:48:42.175197] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.709 [2024-11-15 12:48:42.175204] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.709 [2024-11-15 12:48:42.175208] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.709 [2024-11-15 12:48:42.175212] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xada740) on tqpair=0xa76750 00:13:33.709 [2024-11-15 12:48:42.175217] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:13:33.709 [2024-11-15 12:48:42.175222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:13:33.709 [2024-11-15 12:48:42.175230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:33.709 [2024-11-15 12:48:42.175340] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:13:33.709 [2024-11-15 12:48:42.175351] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:33.709 [2024-11-15 12:48:42.175360] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.709 [2024-11-15 12:48:42.175365] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.709 [2024-11-15 12:48:42.175369] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0xa76750) 00:13:33.709 [2024-11-15 12:48:42.175376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.709 [2024-11-15 12:48:42.175396] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xada740, cid 0, qid 0 00:13:33.709 [2024-11-15 12:48:42.175438] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.709 [2024-11-15 12:48:42.175445] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.709 [2024-11-15 12:48:42.175449] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.709 [2024-11-15 12:48:42.175453] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xada740) on tqpair=0xa76750 00:13:33.709 [2024-11-15 12:48:42.175458] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:33.709 [2024-11-15 12:48:42.175468] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.709 [2024-11-15 12:48:42.175473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.709 [2024-11-15 12:48:42.175477] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa76750) 00:13:33.709 [2024-11-15 12:48:42.175484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.709 [2024-11-15 12:48:42.175501] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xada740, cid 0, qid 0 00:13:33.709 [2024-11-15 12:48:42.175550] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.709 [2024-11-15 12:48:42.175557] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.709 [2024-11-15 12:48:42.175560] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.709 [2024-11-15 12:48:42.175564] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xada740) on tqpair=0xa76750 00:13:33.709 [2024-11-15 12:48:42.175569] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:33.709 [2024-11-15 12:48:42.175574] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:13:33.709 [2024-11-15 12:48:42.175582] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:13:33.709 [2024-11-15 12:48:42.175596] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:13:33.709 [2024-11-15 12:48:42.175622] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.709 [2024-11-15 12:48:42.175628] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa76750) 00:13:33.709 [2024-11-15 12:48:42.175636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.709 [2024-11-15 12:48:42.175657] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xada740, cid 0, qid 0 00:13:33.709 [2024-11-15 12:48:42.175746] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:33.709 [2024-11-15 12:48:42.175753] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:33.709 [2024-11-15 12:48:42.175758] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:33.709 [2024-11-15 12:48:42.175761] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa76750): datao=0, datal=4096, cccid=0 00:13:33.709 [2024-11-15 12:48:42.175766] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xada740) on tqpair(0xa76750): expected_datao=0, payload_size=4096 00:13:33.709 [2024-11-15 12:48:42.175770] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.709 [2024-11-15 12:48:42.175778] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:33.709 [2024-11-15 12:48:42.175783] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:33.709 [2024-11-15 12:48:42.175791] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.709 [2024-11-15 12:48:42.175797] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.709 [2024-11-15 12:48:42.175801] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.709 [2024-11-15 12:48:42.175805] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xada740) on tqpair=0xa76750 00:13:33.709 [2024-11-15 12:48:42.175813] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:13:33.709 [2024-11-15 12:48:42.175818] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:13:33.709 [2024-11-15 12:48:42.175823] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:13:33.709 [2024-11-15 12:48:42.175827] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:13:33.709 [2024-11-15 12:48:42.175831] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:13:33.709 [2024-11-15 12:48:42.175836] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:13:33.709 [2024-11-15 12:48:42.175850] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:13:33.709 [2024-11-15 12:48:42.175858] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.709 [2024-11-15 12:48:42.175862] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.709 [2024-11-15 12:48:42.175866] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa76750) 00:13:33.709 [2024-11-15 12:48:42.175874] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:33.709 [2024-11-15 12:48:42.175894] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xada740, cid 0, qid 0 00:13:33.709 [2024-11-15 12:48:42.175941] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.709 [2024-11-15 12:48:42.175948] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.709 [2024-11-15 12:48:42.175952] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.709 [2024-11-15 12:48:42.175956] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xada740) on tqpair=0xa76750 00:13:33.709 [2024-11-15 
12:48:42.175963] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.709 [2024-11-15 12:48:42.175967] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.709 [2024-11-15 12:48:42.175971] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa76750) 00:13:33.709 [2024-11-15 12:48:42.175978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.709 [2024-11-15 12:48:42.175984] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.709 [2024-11-15 12:48:42.175988] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.709 [2024-11-15 12:48:42.175992] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xa76750) 00:13:33.709 [2024-11-15 12:48:42.175998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.709 [2024-11-15 12:48:42.176004] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.709 [2024-11-15 12:48:42.176008] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.709 [2024-11-15 12:48:42.176012] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa76750) 00:13:33.709 [2024-11-15 12:48:42.176017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.709 [2024-11-15 12:48:42.176023] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.709 [2024-11-15 12:48:42.176027] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.709 [2024-11-15 12:48:42.176031] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa76750) 00:13:33.709 [2024-11-15 12:48:42.176037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.709 [2024-11-15 12:48:42.176042] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:33.709 [2024-11-15 12:48:42.176055] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:33.709 [2024-11-15 12:48:42.176063] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.709 [2024-11-15 12:48:42.176067] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa76750) 00:13:33.709 [2024-11-15 12:48:42.176074] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.709 [2024-11-15 12:48:42.176094] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xada740, cid 0, qid 0 00:13:33.709 [2024-11-15 12:48:42.176101] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xada8c0, cid 1, qid 0 00:13:33.709 [2024-11-15 12:48:42.176106] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadaa40, cid 2, qid 0 00:13:33.709 [2024-11-15 12:48:42.176111] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadabc0, cid 3, qid 0 00:13:33.710 [2024-11-15 12:48:42.176115] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadad40, cid 4, qid 0 00:13:33.710 
[2024-11-15 12:48:42.176198] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.710 [2024-11-15 12:48:42.176205] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.710 [2024-11-15 12:48:42.176209] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.710 [2024-11-15 12:48:42.176213] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadad40) on tqpair=0xa76750 00:13:33.710 [2024-11-15 12:48:42.176218] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:13:33.710 [2024-11-15 12:48:42.176225] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:33.710 [2024-11-15 12:48:42.176233] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:13:33.710 [2024-11-15 12:48:42.176244] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:33.710 [2024-11-15 12:48:42.176252] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.710 [2024-11-15 12:48:42.176256] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.710 [2024-11-15 12:48:42.176260] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa76750) 00:13:33.710 [2024-11-15 12:48:42.176267] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:33.710 [2024-11-15 12:48:42.176286] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadad40, cid 4, qid 0 00:13:33.710 [2024-11-15 12:48:42.176338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.710 [2024-11-15 12:48:42.176345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.710 [2024-11-15 12:48:42.176349] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.710 [2024-11-15 12:48:42.176353] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadad40) on tqpair=0xa76750 00:13:33.710 [2024-11-15 12:48:42.176415] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:13:33.710 [2024-11-15 12:48:42.176427] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:33.710 [2024-11-15 12:48:42.176435] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.710 [2024-11-15 12:48:42.176440] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa76750) 00:13:33.710 [2024-11-15 12:48:42.176447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.710 [2024-11-15 12:48:42.176467] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadad40, cid 4, qid 0 00:13:33.710 [2024-11-15 12:48:42.176523] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:33.710 [2024-11-15 12:48:42.176530] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:33.710 [2024-11-15 12:48:42.176534] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:33.710 [2024-11-15 12:48:42.176538] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa76750): datao=0, datal=4096, cccid=4 00:13:33.710 [2024-11-15 12:48:42.176542] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xadad40) on tqpair(0xa76750): expected_datao=0, payload_size=4096 00:13:33.710 [2024-11-15 12:48:42.176547] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.710 [2024-11-15 12:48:42.176554] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:33.710 [2024-11-15 12:48:42.176559] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:33.710 [2024-11-15 12:48:42.176567] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.710 [2024-11-15 12:48:42.176573] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.710 [2024-11-15 12:48:42.176577] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.710 [2024-11-15 12:48:42.176580] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadad40) on tqpair=0xa76750 00:13:33.710 [2024-11-15 12:48:42.176595] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:13:33.710 [2024-11-15 12:48:42.176620] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:13:33.710 [2024-11-15 12:48:42.176632] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:13:33.710 [2024-11-15 12:48:42.176641] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.710 [2024-11-15 12:48:42.176645] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa76750) 00:13:33.710 [2024-11-15 12:48:42.176653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.710 [2024-11-15 12:48:42.176674] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadad40, cid 4, qid 0 00:13:33.710 [2024-11-15 12:48:42.176861] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:33.710 [2024-11-15 12:48:42.176868] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:33.710 [2024-11-15 12:48:42.176872] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:33.710 [2024-11-15 12:48:42.176876] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa76750): datao=0, datal=4096, cccid=4 00:13:33.710 [2024-11-15 12:48:42.176880] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xadad40) on tqpair(0xa76750): expected_datao=0, payload_size=4096 00:13:33.710 [2024-11-15 12:48:42.176885] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.710 [2024-11-15 12:48:42.176893] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:33.710 [2024-11-15 12:48:42.176897] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:33.710 [2024-11-15 12:48:42.176905] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.710 [2024-11-15 12:48:42.176911] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.710 [2024-11-15 12:48:42.176915] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.710 
[2024-11-15 12:48:42.176919] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadad40) on tqpair=0xa76750 00:13:33.710 [2024-11-15 12:48:42.176936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:33.710 [2024-11-15 12:48:42.176948] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:33.710 [2024-11-15 12:48:42.176956] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.710 [2024-11-15 12:48:42.176961] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa76750) 00:13:33.710 [2024-11-15 12:48:42.176968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.710 [2024-11-15 12:48:42.176989] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadad40, cid 4, qid 0 00:13:33.710 [2024-11-15 12:48:42.177044] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:33.710 [2024-11-15 12:48:42.177059] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:33.710 [2024-11-15 12:48:42.177063] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:33.710 [2024-11-15 12:48:42.177067] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa76750): datao=0, datal=4096, cccid=4 00:13:33.710 [2024-11-15 12:48:42.177072] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xadad40) on tqpair(0xa76750): expected_datao=0, payload_size=4096 00:13:33.710 [2024-11-15 12:48:42.177076] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.710 [2024-11-15 12:48:42.177083] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:33.710 [2024-11-15 12:48:42.177088] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:33.710 [2024-11-15 12:48:42.177096] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.710 [2024-11-15 12:48:42.177102] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.710 [2024-11-15 12:48:42.177106] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.710 [2024-11-15 12:48:42.177109] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadad40) on tqpair=0xa76750 00:13:33.710 [2024-11-15 12:48:42.177118] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:33.710 [2024-11-15 12:48:42.177128] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:13:33.710 [2024-11-15 12:48:42.177138] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:13:33.710 [2024-11-15 12:48:42.177145] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:13:33.710 [2024-11-15 12:48:42.177150] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:33.710 [2024-11-15 12:48:42.177156] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:13:33.710 [2024-11-15 12:48:42.177161] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:13:33.710 [2024-11-15 12:48:42.177166] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:13:33.710 [2024-11-15 12:48:42.177171] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:13:33.710 [2024-11-15 12:48:42.177186] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.710 [2024-11-15 12:48:42.177190] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa76750) 00:13:33.710 [2024-11-15 12:48:42.177198] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.710 [2024-11-15 12:48:42.177205] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.710 [2024-11-15 12:48:42.177209] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.710 [2024-11-15 12:48:42.177213] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa76750) 00:13:33.710 [2024-11-15 12:48:42.177219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.710 [2024-11-15 12:48:42.177244] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadad40, cid 4, qid 0 00:13:33.710 [2024-11-15 12:48:42.177251] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadaec0, cid 5, qid 0 00:13:33.710 [2024-11-15 12:48:42.177314] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.710 [2024-11-15 12:48:42.177321] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.710 [2024-11-15 12:48:42.177325] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.710 [2024-11-15 12:48:42.177329] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadad40) on tqpair=0xa76750 00:13:33.710 [2024-11-15 12:48:42.177335] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.710 [2024-11-15 12:48:42.177341] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.710 [2024-11-15 12:48:42.177345] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.710 [2024-11-15 12:48:42.177349] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadaec0) on tqpair=0xa76750 00:13:33.711 [2024-11-15 12:48:42.177359] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.711 [2024-11-15 12:48:42.177363] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa76750) 00:13:33.711 [2024-11-15 12:48:42.177370] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.711 [2024-11-15 12:48:42.177387] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadaec0, cid 5, qid 0 00:13:33.711 [2024-11-15 12:48:42.177435] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.711 [2024-11-15 12:48:42.177442] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.711 [2024-11-15 12:48:42.177446] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.711 [2024-11-15 12:48:42.177449] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadaec0) on tqpair=0xa76750 00:13:33.711 [2024-11-15 12:48:42.177460] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.711 [2024-11-15 12:48:42.177464] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa76750) 00:13:33.711 [2024-11-15 12:48:42.177471] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.711 [2024-11-15 12:48:42.177488] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadaec0, cid 5, qid 0 00:13:33.711 [2024-11-15 12:48:42.177536] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.711 [2024-11-15 12:48:42.177542] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.711 [2024-11-15 12:48:42.177546] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.711 [2024-11-15 12:48:42.177550] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadaec0) on tqpair=0xa76750 00:13:33.711 [2024-11-15 12:48:42.177560] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.711 [2024-11-15 12:48:42.177565] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa76750) 00:13:33.711 [2024-11-15 12:48:42.177572] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.711 [2024-11-15 12:48:42.177588] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadaec0, cid 5, qid 0 00:13:33.711 [2024-11-15 12:48:42.177645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.711 [2024-11-15 12:48:42.177654] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.711 [2024-11-15 12:48:42.177657] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.711 [2024-11-15 12:48:42.177661] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadaec0) on tqpair=0xa76750 00:13:33.711 [2024-11-15 12:48:42.177689] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.711 [2024-11-15 12:48:42.177696] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa76750) 00:13:33.711 [2024-11-15 12:48:42.177704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.711 [2024-11-15 12:48:42.177712] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.711 [2024-11-15 12:48:42.177716] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa76750) 00:13:33.711 [2024-11-15 12:48:42.177722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.711 [2024-11-15 12:48:42.177730] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.711 [2024-11-15 12:48:42.177734] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xa76750) 00:13:33.711 [2024-11-15 12:48:42.177740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 
nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.711 [2024-11-15 12:48:42.177747] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.711 [2024-11-15 12:48:42.177752] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa76750) 00:13:33.711 [2024-11-15 12:48:42.177758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.711 [2024-11-15 12:48:42.177780] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadaec0, cid 5, qid 0 00:13:33.711 [2024-11-15 12:48:42.177787] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadad40, cid 4, qid 0 00:13:33.711 [2024-11-15 12:48:42.177792] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadb040, cid 6, qid 0 00:13:33.711 [2024-11-15 12:48:42.177797] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadb1c0, cid 7, qid 0 00:13:33.711 [2024-11-15 12:48:42.177925] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:33.711 [2024-11-15 12:48:42.177932] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:33.711 [2024-11-15 12:48:42.177936] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:33.711 [2024-11-15 12:48:42.177940] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa76750): datao=0, datal=8192, cccid=5 00:13:33.711 [2024-11-15 12:48:42.177944] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xadaec0) on tqpair(0xa76750): expected_datao=0, payload_size=8192 00:13:33.711 [2024-11-15 12:48:42.177949] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.711 [2024-11-15 12:48:42.177965] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:33.711 [2024-11-15 12:48:42.177970] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:33.711 [2024-11-15 12:48:42.177976] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:33.711 [2024-11-15 12:48:42.177982] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:33.711 [2024-11-15 12:48:42.177985] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:33.711 [2024-11-15 12:48:42.177989] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa76750): datao=0, datal=512, cccid=4 00:13:33.711 [2024-11-15 12:48:42.177993] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xadad40) on tqpair(0xa76750): expected_datao=0, payload_size=512 00:13:33.711 [2024-11-15 12:48:42.177998] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.711 [2024-11-15 12:48:42.178004] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:33.711 [2024-11-15 12:48:42.178008] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:33.711 [2024-11-15 12:48:42.178013] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:33.711 [2024-11-15 12:48:42.178019] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:33.711 [2024-11-15 12:48:42.178023] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:33.711 [2024-11-15 12:48:42.178026] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa76750): datao=0, datal=512, cccid=6 00:13:33.711 [2024-11-15 12:48:42.178030] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xadb040) on tqpair(0xa76750): expected_datao=0, payload_size=512 00:13:33.711 [2024-11-15 12:48:42.178035] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.711 [2024-11-15 12:48:42.178041] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:33.711 [2024-11-15 12:48:42.178045] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:33.711 [2024-11-15 12:48:42.178051] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:33.711 [2024-11-15 12:48:42.178056] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:33.711 [2024-11-15 12:48:42.178060] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:33.711 [2024-11-15 12:48:42.178063] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa76750): datao=0, datal=4096, cccid=7 00:13:33.711 [2024-11-15 12:48:42.178068] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xadb1c0) on tqpair(0xa76750): expected_datao=0, payload_size=4096 00:13:33.711 [2024-11-15 12:48:42.178072] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.711 [2024-11-15 12:48:42.178078] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:33.711 [2024-11-15 12:48:42.178083] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:33.711 [2024-11-15 12:48:42.178091] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.711 [2024-11-15 12:48:42.178097] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.711 [2024-11-15 12:48:42.178100] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.711 [2024-11-15 12:48:42.178104] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadaec0) on tqpair=0xa76750 00:13:33.711 [2024-11-15 12:48:42.178119] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.711 [2024-11-15 12:48:42.178125] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.711 [2024-11-15 12:48:42.178129] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.711 [2024-11-15 12:48:42.178133] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadad40) on tqpair=0xa76750 00:13:33.711 [2024-11-15 12:48:42.178144] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.711 [2024-11-15 12:48:42.178150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.711 [2024-11-15 12:48:42.178153] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.711 [2024-11-15 12:48:42.178157] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadb040) on tqpair=0xa76750 00:13:33.711 [2024-11-15 12:48:42.178164] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.711 [2024-11-15 12:48:42.178170] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.711 [2024-11-15 12:48:42.178174] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.711 [2024-11-15 12:48:42.178178] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadb1c0) on tqpair=0xa76750 00:13:33.711 ===================================================== 00:13:33.711 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:33.711 ===================================================== 00:13:33.711 Controller Capabilities/Features 00:13:33.711 ================================ 
00:13:33.711 Vendor ID: 8086
00:13:33.711 Subsystem Vendor ID: 8086
00:13:33.711 Serial Number: SPDK00000000000001
00:13:33.711 Model Number: SPDK bdev Controller
00:13:33.711 Firmware Version: 25.01
00:13:33.711 Recommended Arb Burst: 6
00:13:33.711 IEEE OUI Identifier: e4 d2 5c
00:13:33.711 Multi-path I/O
00:13:33.711 May have multiple subsystem ports: Yes
00:13:33.711 May have multiple controllers: Yes
00:13:33.711 Associated with SR-IOV VF: No
00:13:33.711 Max Data Transfer Size: 131072
00:13:33.711 Max Number of Namespaces: 32
00:13:33.711 Max Number of I/O Queues: 127
00:13:33.711 NVMe Specification Version (VS): 1.3
00:13:33.711 NVMe Specification Version (Identify): 1.3
00:13:33.711 Maximum Queue Entries: 128
00:13:33.711 Contiguous Queues Required: Yes
00:13:33.711 Arbitration Mechanisms Supported
00:13:33.711 Weighted Round Robin: Not Supported
00:13:33.711 Vendor Specific: Not Supported
00:13:33.711 Reset Timeout: 15000 ms
00:13:33.711 Doorbell Stride: 4 bytes
00:13:33.711 NVM Subsystem Reset: Not Supported
00:13:33.711 Command Sets Supported
00:13:33.712 NVM Command Set: Supported
00:13:33.712 Boot Partition: Not Supported
00:13:33.712 Memory Page Size Minimum: 4096 bytes
00:13:33.712 Memory Page Size Maximum: 4096 bytes
00:13:33.712 Persistent Memory Region: Not Supported
00:13:33.712 Optional Asynchronous Events Supported
00:13:33.712 Namespace Attribute Notices: Supported
00:13:33.712 Firmware Activation Notices: Not Supported
00:13:33.712 ANA Change Notices: Not Supported
00:13:33.712 PLE Aggregate Log Change Notices: Not Supported
00:13:33.712 LBA Status Info Alert Notices: Not Supported
00:13:33.712 EGE Aggregate Log Change Notices: Not Supported
00:13:33.712 Normal NVM Subsystem Shutdown event: Not Supported
00:13:33.712 Zone Descriptor Change Notices: Not Supported
00:13:33.712 Discovery Log Change Notices: Not Supported
00:13:33.712 Controller Attributes
00:13:33.712 128-bit Host Identifier: Supported
00:13:33.712 Non-Operational Permissive Mode: Not Supported
00:13:33.712 NVM Sets: Not Supported
00:13:33.712 Read Recovery Levels: Not Supported
00:13:33.712 Endurance Groups: Not Supported
00:13:33.712 Predictable Latency Mode: Not Supported
00:13:33.712 Traffic Based Keep ALive: Not Supported
00:13:33.712 Namespace Granularity: Not Supported
00:13:33.712 SQ Associations: Not Supported
00:13:33.712 UUID List: Not Supported
00:13:33.712 Multi-Domain Subsystem: Not Supported
00:13:33.712 Fixed Capacity Management: Not Supported
00:13:33.712 Variable Capacity Management: Not Supported
00:13:33.712 Delete Endurance Group: Not Supported
00:13:33.712 Delete NVM Set: Not Supported
00:13:33.712 Extended LBA Formats Supported: Not Supported
00:13:33.712 Flexible Data Placement Supported: Not Supported
00:13:33.712 
00:13:33.712 Controller Memory Buffer Support
00:13:33.712 ================================
00:13:33.712 Supported: No
00:13:33.712 
00:13:33.712 Persistent Memory Region Support
00:13:33.712 ================================
00:13:33.712 Supported: No
00:13:33.712 
00:13:33.712 Admin Command Set Attributes
00:13:33.712 ============================
00:13:33.712 Security Send/Receive: Not Supported
00:13:33.712 Format NVM: Not Supported
00:13:33.712 Firmware Activate/Download: Not Supported
00:13:33.712 Namespace Management: Not Supported
00:13:33.712 Device Self-Test: Not Supported
00:13:33.712 Directives: Not Supported
00:13:33.712 NVMe-MI: Not Supported
00:13:33.712 Virtualization Management: Not Supported
00:13:33.712 Doorbell Buffer Config: Not Supported
00:13:33.712 Get LBA Status Capability: Not Supported
00:13:33.712 Command & Feature Lockdown Capability: Not Supported
00:13:33.712 Abort Command Limit: 4
00:13:33.712 Async Event Request Limit: 4
00:13:33.712 Number of Firmware Slots: N/A
00:13:33.712 Firmware Slot 1 Read-Only: N/A
00:13:33.712 Firmware Activation Without Reset: N/A
00:13:33.712 Multiple Update Detection Support: N/A
00:13:33.712 Firmware Update Granularity: No Information Provided
00:13:33.712 Per-Namespace SMART Log: No
00:13:33.712 Asymmetric Namespace Access Log Page: Not Supported
00:13:33.712 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:13:33.712 Command Effects Log Page: Supported
00:13:33.712 Get Log Page Extended Data: Supported
00:13:33.712 Telemetry Log Pages: Not Supported
00:13:33.712 Persistent Event Log Pages: Not Supported
00:13:33.712 Supported Log Pages Log Page: May Support
00:13:33.712 Commands Supported & Effects Log Page: Not Supported
00:13:33.712 Feature Identifiers & Effects Log Page:May Support
00:13:33.712 NVMe-MI Commands & Effects Log Page: May Support
00:13:33.712 Data Area 4 for Telemetry Log: Not Supported
00:13:33.712 Error Log Page Entries Supported: 128
00:13:33.712 Keep Alive: Supported
00:13:33.712 Keep Alive Granularity: 10000 ms
00:13:33.712 
00:13:33.712 NVM Command Set Attributes
00:13:33.712 ==========================
00:13:33.712 Submission Queue Entry Size
00:13:33.712 Max: 64
00:13:33.712 Min: 64
00:13:33.712 Completion Queue Entry Size
00:13:33.712 Max: 16
00:13:33.712 Min: 16
00:13:33.712 Number of Namespaces: 32
00:13:33.712 Compare Command: Supported
00:13:33.712 Write Uncorrectable Command: Not Supported
00:13:33.712 Dataset Management Command: Supported
00:13:33.712 Write Zeroes Command: Supported
00:13:33.712 Set Features Save Field: Not Supported
00:13:33.712 Reservations: Supported
00:13:33.712 Timestamp: Not Supported
00:13:33.712 Copy: Supported
00:13:33.712 Volatile Write Cache: Present
00:13:33.712 Atomic Write Unit (Normal): 1
00:13:33.712 Atomic Write Unit (PFail): 1
00:13:33.712 Atomic Compare & Write Unit: 1
00:13:33.712 Fused Compare & Write: Supported
00:13:33.712 Scatter-Gather List
00:13:33.712 SGL Command Set: Supported
00:13:33.712 SGL Keyed: Supported
00:13:33.712 SGL Bit Bucket Descriptor: Not Supported
00:13:33.712 SGL Metadata Pointer: Not Supported
00:13:33.712 Oversized SGL: Not Supported
00:13:33.712 SGL Metadata Address: Not Supported
00:13:33.712 SGL Offset: Supported
00:13:33.712 Transport SGL Data Block: Not Supported
00:13:33.712 Replay Protected Memory Block: Not Supported
00:13:33.712 
00:13:33.712 Firmware Slot Information
00:13:33.712 =========================
00:13:33.712 Active slot: 1
00:13:33.712 Slot 1 Firmware Revision: 25.01
00:13:33.712 
00:13:33.712 
00:13:33.712 Commands Supported and Effects
00:13:33.712 ==============================
00:13:33.712 Admin Commands
00:13:33.712 --------------
00:13:33.712 Get Log Page (02h): Supported
00:13:33.712 Identify (06h): Supported
00:13:33.712 Abort (08h): Supported
00:13:33.712 Set Features (09h): Supported
00:13:33.712 Get Features (0Ah): Supported
00:13:33.712 Asynchronous Event Request (0Ch): Supported
00:13:33.712 Keep Alive (18h): Supported
00:13:33.712 I/O Commands
00:13:33.712 ------------
00:13:33.712 Flush (00h): Supported LBA-Change
00:13:33.712 Write (01h): Supported LBA-Change
00:13:33.712 Read (02h): Supported
00:13:33.712 Compare (05h): Supported
00:13:33.712 Write Zeroes (08h): Supported LBA-Change
00:13:33.712 Dataset Management (09h): Supported LBA-Change
00:13:33.712 Copy (19h):
Supported LBA-Change 00:13:33.712 00:13:33.712 Error Log 00:13:33.712 ========= 00:13:33.712 00:13:33.712 Arbitration 00:13:33.712 =========== 00:13:33.712 Arbitration Burst: 1 00:13:33.712 00:13:33.712 Power Management 00:13:33.712 ================ 00:13:33.712 Number of Power States: 1 00:13:33.712 Current Power State: Power State #0 00:13:33.712 Power State #0: 00:13:33.712 Max Power: 0.00 W 00:13:33.712 Non-Operational State: Operational 00:13:33.712 Entry Latency: Not Reported 00:13:33.712 Exit Latency: Not Reported 00:13:33.712 Relative Read Throughput: 0 00:13:33.712 Relative Read Latency: 0 00:13:33.712 Relative Write Throughput: 0 00:13:33.712 Relative Write Latency: 0 00:13:33.712 Idle Power: Not Reported 00:13:33.712 Active Power: Not Reported 00:13:33.712 Non-Operational Permissive Mode: Not Supported 00:13:33.712 00:13:33.712 Health Information 00:13:33.712 ================== 00:13:33.712 Critical Warnings: 00:13:33.712 Available Spare Space: OK 00:13:33.712 Temperature: OK 00:13:33.712 Device Reliability: OK 00:13:33.712 Read Only: No 00:13:33.712 Volatile Memory Backup: OK 00:13:33.712 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:33.712 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:33.712 Available Spare: 0% 00:13:33.712 Available Spare Threshold: 0% 00:13:33.712 Life Percentage Used:[2024-11-15 12:48:42.178269] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.712 [2024-11-15 12:48:42.178276] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa76750) 00:13:33.712 [2024-11-15 12:48:42.178283] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.712 [2024-11-15 12:48:42.178305] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadb1c0, cid 7, qid 0 00:13:33.712 [2024-11-15 12:48:42.178350] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.712 [2024-11-15 12:48:42.178357] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.712 [2024-11-15 12:48:42.178361] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.712 [2024-11-15 12:48:42.178365] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadb1c0) on tqpair=0xa76750 00:13:33.712 [2024-11-15 12:48:42.178401] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:13:33.712 [2024-11-15 12:48:42.178412] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xada740) on tqpair=0xa76750 00:13:33.712 [2024-11-15 12:48:42.178418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.712 [2024-11-15 12:48:42.178423] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xada8c0) on tqpair=0xa76750 00:13:33.712 [2024-11-15 12:48:42.178428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.712 [2024-11-15 12:48:42.178433] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadaa40) on tqpair=0xa76750 00:13:33.712 [2024-11-15 12:48:42.178438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.712 [2024-11-15 12:48:42.178442] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadabc0) on tqpair=0xa76750 
00:13:33.713 [2024-11-15 12:48:42.178447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.713 [2024-11-15 12:48:42.178455] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.713 [2024-11-15 12:48:42.178460] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.713 [2024-11-15 12:48:42.178463] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa76750) 00:13:33.713 [2024-11-15 12:48:42.178471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.713 [2024-11-15 12:48:42.178492] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadabc0, cid 3, qid 0 00:13:33.713 [2024-11-15 12:48:42.178537] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.713 [2024-11-15 12:48:42.178544] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.713 [2024-11-15 12:48:42.178547] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.713 [2024-11-15 12:48:42.178551] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadabc0) on tqpair=0xa76750 00:13:33.713 [2024-11-15 12:48:42.178559] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.713 [2024-11-15 12:48:42.178563] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.713 [2024-11-15 12:48:42.178567] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa76750) 00:13:33.713 [2024-11-15 12:48:42.178574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.713 [2024-11-15 12:48:42.178595] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadabc0, cid 3, qid 0 00:13:33.713 [2024-11-15 12:48:42.178667] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.713 [2024-11-15 12:48:42.178675] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.713 [2024-11-15 12:48:42.178678] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.713 [2024-11-15 12:48:42.178682] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadabc0) on tqpair=0xa76750 00:13:33.713 [2024-11-15 12:48:42.178688] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:13:33.713 [2024-11-15 12:48:42.178693] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:13:33.713 [2024-11-15 12:48:42.178703] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.713 [2024-11-15 12:48:42.178707] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.713 [2024-11-15 12:48:42.178711] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa76750) 00:13:33.713 [2024-11-15 12:48:42.178719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.713 [2024-11-15 12:48:42.178738] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadabc0, cid 3, qid 0 00:13:33.713 [2024-11-15 12:48:42.178781] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.713 [2024-11-15 12:48:42.178788] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:13:33.713 [2024-11-15 12:48:42.178791] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.713 [2024-11-15 12:48:42.178795] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadabc0) on tqpair=0xa76750 00:13:33.713 [2024-11-15 12:48:42.178806] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.713 [2024-11-15 12:48:42.178811] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.713 [2024-11-15 12:48:42.178815] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa76750) 00:13:33.713 [2024-11-15 12:48:42.178822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.713 [2024-11-15 12:48:42.178839] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadabc0, cid 3, qid 0 00:13:33.713 [2024-11-15 12:48:42.178884] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.713 [2024-11-15 12:48:42.178891] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.713 [2024-11-15 12:48:42.178895] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.713 [2024-11-15 12:48:42.178899] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadabc0) on tqpair=0xa76750 00:13:33.713 [2024-11-15 12:48:42.178909] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.713 [2024-11-15 12:48:42.178913] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.713 [2024-11-15 12:48:42.178917] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa76750) 00:13:33.713 [2024-11-15 12:48:42.178924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.713 [2024-11-15 12:48:42.178941] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadabc0, cid 3, qid 0 00:13:33.713 [2024-11-15 12:48:42.178983] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.713 [2024-11-15 12:48:42.178994] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.713 [2024-11-15 12:48:42.178999] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.713 [2024-11-15 12:48:42.179003] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadabc0) on tqpair=0xa76750 00:13:33.713 [2024-11-15 12:48:42.179013] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.713 [2024-11-15 12:48:42.179018] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.713 [2024-11-15 12:48:42.179022] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa76750) 00:13:33.713 [2024-11-15 12:48:42.179029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.713 [2024-11-15 12:48:42.179047] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadabc0, cid 3, qid 0 00:13:33.713 [2024-11-15 12:48:42.179095] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.713 [2024-11-15 12:48:42.179106] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.713 [2024-11-15 12:48:42.179110] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.713 [2024-11-15 12:48:42.179115] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0xadabc0) on tqpair=0xa76750 00:13:33.713 [2024-11-15 12:48:42.179125] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.713 [2024-11-15 12:48:42.179130] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.713 [2024-11-15 12:48:42.179134] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa76750) 00:13:33.713 [2024-11-15 12:48:42.179141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.713 [2024-11-15 12:48:42.179158] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadabc0, cid 3, qid 0 00:13:33.713 [2024-11-15 12:48:42.179200] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.713 [2024-11-15 12:48:42.179207] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.713 [2024-11-15 12:48:42.179210] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.713 [2024-11-15 12:48:42.179214] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadabc0) on tqpair=0xa76750 00:13:33.713 [2024-11-15 12:48:42.179224] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.713 [2024-11-15 12:48:42.179229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.713 [2024-11-15 12:48:42.179233] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa76750) 00:13:33.713 [2024-11-15 12:48:42.179240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.713 [2024-11-15 12:48:42.179257] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadabc0, cid 3, qid 0 00:13:33.713 [2024-11-15 12:48:42.179299] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.713 [2024-11-15 12:48:42.179306] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.713 [2024-11-15 12:48:42.179309] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.713 [2024-11-15 12:48:42.179313] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadabc0) on tqpair=0xa76750 00:13:33.713 [2024-11-15 12:48:42.179323] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.713 [2024-11-15 12:48:42.179327] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.713 [2024-11-15 12:48:42.179331] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa76750) 00:13:33.713 [2024-11-15 12:48:42.179338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.713 [2024-11-15 12:48:42.179355] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadabc0, cid 3, qid 0 00:13:33.713 [2024-11-15 12:48:42.179397] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.713 [2024-11-15 12:48:42.179404] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.713 [2024-11-15 12:48:42.179407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.713 [2024-11-15 12:48:42.179411] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadabc0) on tqpair=0xa76750 00:13:33.713 [2024-11-15 12:48:42.179421] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.713 [2024-11-15 12:48:42.179426] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.713 [2024-11-15 12:48:42.179429] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa76750) 00:13:33.713 [2024-11-15 12:48:42.179436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.714 [2024-11-15 12:48:42.179453] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadabc0, cid 3, qid 0 00:13:33.714 [2024-11-15 12:48:42.179495] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.714 [2024-11-15 12:48:42.179501] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.714 [2024-11-15 12:48:42.179505] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.714 [2024-11-15 12:48:42.179509] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadabc0) on tqpair=0xa76750 00:13:33.714 [2024-11-15 12:48:42.179519] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.714 [2024-11-15 12:48:42.179524] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.714 [2024-11-15 12:48:42.179527] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa76750) 00:13:33.714 [2024-11-15 12:48:42.179534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.714 [2024-11-15 12:48:42.179551] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadabc0, cid 3, qid 0 00:13:33.714 [2024-11-15 12:48:42.179594] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.714 [2024-11-15 12:48:42.179611] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.714 [2024-11-15 12:48:42.179632] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.714 [2024-11-15 12:48:42.179637] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadabc0) on tqpair=0xa76750 00:13:33.714 [2024-11-15 12:48:42.179648] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.714 [2024-11-15 12:48:42.179653] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.714 [2024-11-15 12:48:42.179657] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa76750) 00:13:33.714 [2024-11-15 12:48:42.179664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.714 [2024-11-15 12:48:42.179683] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadabc0, cid 3, qid 0 00:13:33.714 [2024-11-15 12:48:42.179734] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.714 [2024-11-15 12:48:42.179741] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.714 [2024-11-15 12:48:42.179745] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.714 [2024-11-15 12:48:42.179749] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadabc0) on tqpair=0xa76750 00:13:33.714 [2024-11-15 12:48:42.179759] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.714 [2024-11-15 12:48:42.179764] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.714 [2024-11-15 12:48:42.179768] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa76750) 00:13:33.714 [2024-11-15 
12:48:42.179775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.714 [2024-11-15 12:48:42.179793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadabc0, cid 3, qid 0 00:13:33.714 [2024-11-15 12:48:42.179842] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.714 [2024-11-15 12:48:42.179849] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.714 [2024-11-15 12:48:42.179853] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.714 [2024-11-15 12:48:42.179857] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadabc0) on tqpair=0xa76750 00:13:33.714 [2024-11-15 12:48:42.179867] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.714 [2024-11-15 12:48:42.179872] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.714 [2024-11-15 12:48:42.179875] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa76750) 00:13:33.714 [2024-11-15 12:48:42.179883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.714 [2024-11-15 12:48:42.179900] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadabc0, cid 3, qid 0 00:13:33.714 [2024-11-15 12:48:42.179946] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.714 [2024-11-15 12:48:42.179967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.714 [2024-11-15 12:48:42.179971] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.714 [2024-11-15 12:48:42.179975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadabc0) on tqpair=0xa76750 00:13:33.714 [2024-11-15 12:48:42.179986] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.714 [2024-11-15 12:48:42.179990] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.714 [2024-11-15 12:48:42.179994] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa76750) 00:13:33.714 [2024-11-15 12:48:42.180001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.714 [2024-11-15 12:48:42.180017] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadabc0, cid 3, qid 0 00:13:33.714 [2024-11-15 12:48:42.180070] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.714 [2024-11-15 12:48:42.180076] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.714 [2024-11-15 12:48:42.180080] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.714 [2024-11-15 12:48:42.180084] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadabc0) on tqpair=0xa76750 00:13:33.714 [2024-11-15 12:48:42.180094] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.714 [2024-11-15 12:48:42.180099] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.714 [2024-11-15 12:48:42.180102] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa76750) 00:13:33.714 [2024-11-15 12:48:42.180109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.714 [2024-11-15 12:48:42.180126] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadabc0, cid 3, qid 0 00:13:33.714 [2024-11-15 12:48:42.180174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.714 [2024-11-15 12:48:42.180180] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.714 [2024-11-15 12:48:42.180184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.714 [2024-11-15 12:48:42.180188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadabc0) on tqpair=0xa76750 00:13:33.714 [2024-11-15 12:48:42.180198] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.714 [2024-11-15 12:48:42.180202] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.714 [2024-11-15 12:48:42.180206] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa76750) 00:13:33.714 [2024-11-15 12:48:42.180213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.714 [2024-11-15 12:48:42.180230] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadabc0, cid 3, qid 0 00:13:33.714 [2024-11-15 12:48:42.180276] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.714 [2024-11-15 12:48:42.180282] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.714 [2024-11-15 12:48:42.180286] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.714 [2024-11-15 12:48:42.180290] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadabc0) on tqpair=0xa76750 00:13:33.714 [2024-11-15 12:48:42.180300] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.714 [2024-11-15 12:48:42.180304] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.714 [2024-11-15 12:48:42.180308] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa76750) 00:13:33.714 [2024-11-15 12:48:42.180315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.714 [2024-11-15 12:48:42.180332] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadabc0, cid 3, qid 0 00:13:33.714 [2024-11-15 12:48:42.180380] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.714 [2024-11-15 12:48:42.180387] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.714 [2024-11-15 12:48:42.180390] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.714 [2024-11-15 12:48:42.180394] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadabc0) on tqpair=0xa76750 00:13:33.714 [2024-11-15 12:48:42.180404] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.714 [2024-11-15 12:48:42.180409] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.714 [2024-11-15 12:48:42.180412] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa76750) 00:13:33.714 [2024-11-15 12:48:42.180419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.714 [2024-11-15 12:48:42.180436] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadabc0, cid 3, qid 0 00:13:33.714 [2024-11-15 12:48:42.180482] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.714 [2024-11-15 
12:48:42.180488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.714 [2024-11-15 12:48:42.180492] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.714 [2024-11-15 12:48:42.180496] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadabc0) on tqpair=0xa76750 00:13:33.714 [2024-11-15 12:48:42.180506] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.714 [2024-11-15 12:48:42.180510] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.714 [2024-11-15 12:48:42.180514] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa76750) 00:13:33.714 [2024-11-15 12:48:42.180521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.714 [2024-11-15 12:48:42.180538] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadabc0, cid 3, qid 0
[... the same nvme_tcp *DEBUG*/*NOTICE* block (nvme_tcp_pdu_ch_handle pdu type = 5, nvme_tcp_pdu_psh_handle, nvme_tcp_capsule_resp_hdr_handle, nvme_tcp_req_complete for tcp_req 0xadabc0 on tqpair 0xa76750, nvme_tcp_build_contig_request, nvme_tcp_qpair_capsule_cmd_send capsule_cmd cid=3, FABRIC PROPERTY GET qid:0 cid:3, nvme_tcp_qpair_cmd_send_complete) repeats with only the timestamps advancing from 12:48:42.180583 through 12:48:42.182465 while the host polls the admin queue during controller shutdown; the tail of the final repetitions and the shutdown completion follow ...]
00:13:33.716 [2024-11-15 12:48:42.182469] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadabc0) on tqpair=0xa76750 00:13:33.716 [2024-11-15 12:48:42.182479] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.716 [2024-11-15 12:48:42.182483] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.716 [2024-11-15 12:48:42.182487] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa76750) 00:13:33.716 [2024-11-15 12:48:42.182494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.716 [2024-11-15 12:48:42.182511] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadabc0, cid 3, qid 0 00:13:33.716 [2024-11-15 12:48:42.182555] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.716 [2024-11-15 12:48:42.182562] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.716 [2024-11-15 12:48:42.182565] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.716 [2024-11-15 12:48:42.182569] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadabc0) on tqpair=0xa76750 00:13:33.716 [2024-11-15 12:48:42.182580] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:33.716 [2024-11-15 12:48:42.182584] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:33.716 [2024-11-15 12:48:42.182588] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa76750) 00:13:33.716 [2024-11-15 12:48:42.182595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.716 [2024-11-15 12:48:42.186634] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xadabc0, cid 3, qid 0 00:13:33.716 [2024-11-15 12:48:42.186684] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:33.716 [2024-11-15 12:48:42.186692] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:33.716 [2024-11-15 12:48:42.186696] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:33.716 [2024-11-15 12:48:42.186700] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xadabc0) on tqpair=0xa76750 00:13:33.716 [2024-11-15 12:48:42.186709] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 8 milliseconds 00:13:33.716 0% 00:13:33.716 Data Units Read: 0 00:13:33.716 Data Units Written: 0 00:13:33.716 Host Read Commands: 0 00:13:33.716 Host Write Commands: 0 00:13:33.716 Controller Busy Time: 0 minutes 00:13:33.716 Power Cycles: 0 00:13:33.716 Power On Hours: 0 hours 00:13:33.716 Unsafe Shutdowns: 0 00:13:33.716 Unrecoverable Media Errors: 0 00:13:33.716 Lifetime Error Log Entries: 0 00:13:33.716 Warning Temperature Time: 0 minutes 00:13:33.716 Critical Temperature Time: 0 minutes 00:13:33.716 00:13:33.716 Number of Queues 00:13:33.716 ================ 00:13:33.716 Number of I/O Submission Queues: 127 00:13:33.716 Number of I/O Completion Queues: 127 00:13:33.716 00:13:33.716 Active Namespaces 00:13:33.716 ================= 00:13:33.716 Namespace ID:1 00:13:33.716 Error Recovery Timeout: Unlimited 00:13:33.716 Command Set Identifier: NVM (00h) 00:13:33.716 Deallocate: Supported 00:13:33.716 Deallocated/Unwritten Error: Not Supported 00:13:33.716 Deallocated Read Value: Unknown 00:13:33.716 Deallocate in Write 
Zeroes: Not Supported 00:13:33.716 Deallocated Guard Field: 0xFFFF 00:13:33.716 Flush: Supported 00:13:33.716 Reservation: Supported 00:13:33.716 Namespace Sharing Capabilities: Multiple Controllers 00:13:33.716 Size (in LBAs): 131072 (0GiB) 00:13:33.716 Capacity (in LBAs): 131072 (0GiB) 00:13:33.716 Utilization (in LBAs): 131072 (0GiB) 00:13:33.716 NGUID: ABCDEF0123456789ABCDEF0123456789 00:13:33.716 EUI64: ABCDEF0123456789 00:13:33.716 UUID: 0fd96e6e-bb82-4281-b49f-f41f69883985 00:13:33.716 Thin Provisioning: Not Supported 00:13:33.716 Per-NS Atomic Units: Yes 00:13:33.716 Atomic Boundary Size (Normal): 0 00:13:33.716 Atomic Boundary Size (PFail): 0 00:13:33.716 Atomic Boundary Offset: 0 00:13:33.716 Maximum Single Source Range Length: 65535 00:13:33.716 Maximum Copy Length: 65535 00:13:33.716 Maximum Source Range Count: 1 00:13:33.716 NGUID/EUI64 Never Reused: No 00:13:33.716 Namespace Write Protected: No 00:13:33.716 Number of LBA Formats: 1 00:13:33.716 Current LBA Format: LBA Format #00 00:13:33.716 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:33.716 00:13:33.716 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:13:33.716 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:33.716 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.716 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:33.716 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.716 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:13:33.716 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:13:33.716 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:33.716 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:13:33.716 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:33.716 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:13:33.716 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:33.716 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:33.716 rmmod nvme_tcp 00:13:33.716 rmmod nvme_fabrics 00:13:33.716 rmmod nvme_keyring 00:13:33.716 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:33.716 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:13:33.716 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:13:33.716 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 73536 ']' 00:13:33.716 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 73536 00:13:33.716 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 73536 ']' 00:13:33.716 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 73536 00:13:33.716 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:13:33.716 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:33.716 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73536 00:13:33.717 
12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:33.717 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:33.717 killing process with pid 73536 00:13:33.717 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73536' 00:13:33.717 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 73536 00:13:33.717 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 73536 00:13:33.976 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:33.976 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:33.976 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:33.976 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:13:33.976 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:13:33.976 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:33.976 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:13:33.976 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:33.976 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:33.976 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:33.976 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:33.977 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:33.977 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:33.977 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:33.977 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:33.977 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:33.977 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:33.977 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:33.977 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:34.237 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:34.237 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:34.237 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:34.237 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:34.237 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.237 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:34.237 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.237 12:48:42 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:13:34.237 00:13:34.237 real 0m2.057s 00:13:34.237 user 0m4.083s 00:13:34.237 sys 0m0.668s 00:13:34.237 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:34.237 ************************************ 00:13:34.237 12:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:34.237 END TEST nvmf_identify 00:13:34.237 ************************************ 00:13:34.237 12:48:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:13:34.237 12:48:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:34.237 12:48:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:34.237 12:48:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:13:34.237 ************************************ 00:13:34.237 START TEST nvmf_perf 00:13:34.237 ************************************ 00:13:34.237 12:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:13:34.237 * Looking for test storage... 00:13:34.237 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:34.237 12:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:34.237 12:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:13:34.237 12:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:34.497 12:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:34.497 12:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:34.497 12:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:34.497 12:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:34.497 12:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:13:34.497 12:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:13:34.497 12:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:13:34.497 12:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:13:34.497 12:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:13:34.497 12:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:13:34.497 12:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:13:34.497 12:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:34.497 12:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:13:34.497 12:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:13:34.497 12:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:34.497 12:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:34.497 12:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:13:34.497 12:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:13:34.497 12:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:34.497 12:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:13:34.497 12:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:13:34.497 12:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:13:34.497 12:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:13:34.497 12:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:34.497 12:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:13:34.497 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:13:34.497 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:34.497 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:34.497 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:13:34.497 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:34.497 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:34.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.497 --rc genhtml_branch_coverage=1 00:13:34.497 --rc genhtml_function_coverage=1 00:13:34.497 --rc genhtml_legend=1 00:13:34.497 --rc geninfo_all_blocks=1 00:13:34.497 --rc geninfo_unexecuted_blocks=1 00:13:34.497 00:13:34.497 ' 00:13:34.497 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:34.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.497 --rc genhtml_branch_coverage=1 00:13:34.497 --rc genhtml_function_coverage=1 00:13:34.497 --rc genhtml_legend=1 00:13:34.497 --rc geninfo_all_blocks=1 00:13:34.497 --rc geninfo_unexecuted_blocks=1 00:13:34.497 00:13:34.497 ' 00:13:34.497 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:34.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.497 --rc genhtml_branch_coverage=1 00:13:34.497 --rc genhtml_function_coverage=1 00:13:34.497 --rc genhtml_legend=1 00:13:34.497 --rc geninfo_all_blocks=1 00:13:34.497 --rc geninfo_unexecuted_blocks=1 00:13:34.497 00:13:34.497 ' 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:34.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.498 --rc genhtml_branch_coverage=1 00:13:34.498 --rc genhtml_function_coverage=1 00:13:34.498 --rc genhtml_legend=1 00:13:34.498 --rc geninfo_all_blocks=1 00:13:34.498 --rc geninfo_unexecuted_blocks=1 00:13:34.498 00:13:34.498 ' 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:34.498 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:34.498 Cannot find device "nvmf_init_br" 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:34.498 Cannot find device "nvmf_init_br2" 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:34.498 Cannot find device "nvmf_tgt_br" 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:34.498 Cannot find device "nvmf_tgt_br2" 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:34.498 Cannot find device "nvmf_init_br" 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:34.498 Cannot find device "nvmf_init_br2" 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:34.498 Cannot find device "nvmf_tgt_br" 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:34.498 Cannot find device "nvmf_tgt_br2" 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:34.498 Cannot find device "nvmf_br" 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:34.498 Cannot find device "nvmf_init_if" 00:13:34.498 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:13:34.499 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:34.758 Cannot find device "nvmf_init_if2" 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:34.758 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:34.758 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:34.758 12:48:43 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:34.758 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:34.758 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:13:34.758 00:13:34.758 --- 10.0.0.3 ping statistics --- 00:13:34.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.758 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:34.758 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:13:34.758 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:13:34.758 00:13:34.758 --- 10.0.0.4 ping statistics --- 00:13:34.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.758 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:13:34.758 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:35.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:35.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:13:35.017 00:13:35.017 --- 10.0.0.1 ping statistics --- 00:13:35.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.017 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:13:35.017 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:35.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:35.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:13:35.017 00:13:35.017 --- 10.0.0.2 ping statistics --- 00:13:35.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.017 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:13:35.017 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:35.017 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:13:35.017 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:35.017 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:35.017 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:35.017 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:35.017 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:35.017 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:35.017 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:35.017 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:13:35.017 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:35.017 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:35.017 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:13:35.017 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=73788 00:13:35.017 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:35.017 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 73788 00:13:35.017 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 73788 ']' 00:13:35.017 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.017 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:35.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.018 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:35.018 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:35.018 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:13:35.018 [2024-11-15 12:48:43.516394] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:13:35.018 [2024-11-15 12:48:43.516481] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.018 [2024-11-15 12:48:43.662786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:35.277 [2024-11-15 12:48:43.691732] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:35.277 [2024-11-15 12:48:43.691971] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:35.277 [2024-11-15 12:48:43.692045] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:35.277 [2024-11-15 12:48:43.692142] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:35.277 [2024-11-15 12:48:43.692198] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:35.277 [2024-11-15 12:48:43.693016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:35.277 [2024-11-15 12:48:43.693165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:35.277 [2024-11-15 12:48:43.693270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:35.277 [2024-11-15 12:48:43.693285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.277 [2024-11-15 12:48:43.720787] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:35.277 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:35.277 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:13:35.277 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:35.277 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:35.277 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:13:35.277 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:35.277 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:13:35.277 12:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:35.845 12:48:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:13:35.845 12:48:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:13:36.103 12:48:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:13:36.103 12:48:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:36.361 12:48:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:13:36.361 12:48:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:13:36.361 12:48:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:13:36.362 12:48:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:13:36.362 12:48:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:36.620 [2024-11-15 12:48:45.054269] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:36.621 12:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:36.880 12:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:13:36.880 12:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:37.139 12:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:13:37.139 12:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:13:37.399 12:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:37.399 [2024-11-15 12:48:46.039401] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:37.399 12:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:37.657 12:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:13:37.657 12:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:37.657 12:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:13:37.657 12:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:39.036 Initializing NVMe Controllers 00:13:39.036 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:39.036 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:39.036 Initialization complete. Launching workers. 00:13:39.036 ======================================================== 00:13:39.036 Latency(us) 00:13:39.036 Device Information : IOPS MiB/s Average min max 00:13:39.036 PCIE (0000:00:10.0) NSID 1 from core 0: 21973.03 85.83 1456.39 379.74 9041.33 00:13:39.036 ======================================================== 00:13:39.036 Total : 21973.03 85.83 1456.39 379.74 9041.33 00:13:39.036 00:13:39.036 12:48:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:40.415 Initializing NVMe Controllers 00:13:40.415 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:40.415 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:40.415 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:40.415 Initialization complete. Launching workers. 
00:13:40.415 ======================================================== 00:13:40.415 Latency(us) 00:13:40.415 Device Information : IOPS MiB/s Average min max 00:13:40.415 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4167.00 16.28 235.83 92.58 7179.65 00:13:40.415 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 126.00 0.49 7984.93 6935.96 12018.43 00:13:40.415 ======================================================== 00:13:40.415 Total : 4293.00 16.77 463.27 92.58 12018.43 00:13:40.415 00:13:40.415 12:48:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:41.792 Initializing NVMe Controllers 00:13:41.792 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:41.792 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:41.792 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:41.792 Initialization complete. Launching workers. 00:13:41.792 ======================================================== 00:13:41.792 Latency(us) 00:13:41.792 Device Information : IOPS MiB/s Average min max 00:13:41.792 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9534.56 37.24 3356.05 516.45 7730.37 00:13:41.792 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4000.61 15.63 8028.00 6298.08 12517.62 00:13:41.792 ======================================================== 00:13:41.792 Total : 13535.18 52.87 4736.95 516.45 12517.62 00:13:41.792 00:13:41.792 12:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:13:41.792 12:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:44.329 Initializing NVMe Controllers 00:13:44.329 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:44.329 Controller IO queue size 128, less than required. 00:13:44.329 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:44.329 Controller IO queue size 128, less than required. 00:13:44.329 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:44.329 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:44.329 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:44.329 Initialization complete. Launching workers. 
00:13:44.329 ======================================================== 00:13:44.329 Latency(us) 00:13:44.329 Device Information : IOPS MiB/s Average min max 00:13:44.329 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2046.98 511.74 63089.32 33291.61 98522.60 00:13:44.329 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 669.49 167.37 196707.30 59966.11 326458.40 00:13:44.329 ======================================================== 00:13:44.329 Total : 2716.47 679.12 96020.39 33291.61 326458.40 00:13:44.329 00:13:44.329 12:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:13:44.329 Initializing NVMe Controllers 00:13:44.329 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:44.329 Controller IO queue size 128, less than required. 00:13:44.329 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:44.329 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:13:44.329 Controller IO queue size 128, less than required. 00:13:44.329 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:44.329 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:13:44.329 WARNING: Some requested NVMe devices were skipped 00:13:44.329 No valid NVMe controllers or AIO or URING devices found 00:13:44.329 12:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:13:46.863 Initializing NVMe Controllers 00:13:46.863 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:46.863 Controller IO queue size 128, less than required. 00:13:46.863 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:46.863 Controller IO queue size 128, less than required. 00:13:46.863 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:46.864 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:46.864 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:46.864 Initialization complete. Launching workers. 
00:13:46.864 00:13:46.864 ==================== 00:13:46.864 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:13:46.864 TCP transport: 00:13:46.864 polls: 9266 00:13:46.864 idle_polls: 4887 00:13:46.864 sock_completions: 4379 00:13:46.864 nvme_completions: 6947 00:13:46.864 submitted_requests: 10462 00:13:46.864 queued_requests: 1 00:13:46.864 00:13:46.864 ==================== 00:13:46.864 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:13:46.864 TCP transport: 00:13:46.864 polls: 10195 00:13:46.864 idle_polls: 5762 00:13:46.864 sock_completions: 4433 00:13:46.864 nvme_completions: 7217 00:13:46.864 submitted_requests: 10804 00:13:46.864 queued_requests: 1 00:13:46.864 ======================================================== 00:13:46.864 Latency(us) 00:13:46.864 Device Information : IOPS MiB/s Average min max 00:13:46.864 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1732.87 433.22 74920.09 39848.18 107454.18 00:13:46.864 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1800.23 450.06 72166.66 25052.55 112535.73 00:13:46.864 ======================================================== 00:13:46.864 Total : 3533.10 883.27 73517.13 25052.55 112535.73 00:13:46.864 00:13:46.864 12:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:13:46.864 12:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.122 12:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:13:47.122 12:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:13:47.122 12:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:13:47.122 12:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:47.122 12:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:13:47.122 12:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:47.122 12:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:13:47.122 12:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:47.122 12:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:47.122 rmmod nvme_tcp 00:13:47.122 rmmod nvme_fabrics 00:13:47.122 rmmod nvme_keyring 00:13:47.381 12:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:47.381 12:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:13:47.381 12:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:13:47.381 12:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 73788 ']' 00:13:47.381 12:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 73788 00:13:47.381 12:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 73788 ']' 00:13:47.381 12:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 73788 00:13:47.381 12:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:13:47.381 12:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:47.381 12:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73788 00:13:47.381 killing process with pid 73788 00:13:47.381 12:48:55 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:47.381 12:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:47.381 12:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73788' 00:13:47.381 12:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 73788 00:13:47.381 12:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 73788 00:13:47.640 12:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:47.640 12:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:47.640 12:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:47.640 12:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:13:47.640 12:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:47.640 12:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:13:47.640 12:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:13:47.640 12:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:47.640 12:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:47.640 12:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:47.640 12:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:47.640 12:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:47.640 12:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:47.899 12:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:47.899 12:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:47.899 12:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:47.899 12:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:47.899 12:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:47.899 12:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:47.899 12:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:47.899 12:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:47.899 12:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:47.899 12:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:47.899 12:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.899 12:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:47.899 12:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.899 12:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:13:47.899 00:13:47.899 real 0m13.659s 00:13:47.899 user 0m49.251s 00:13:47.899 sys 0m3.891s 00:13:47.899 12:48:56 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:47.899 ************************************ 00:13:47.899 END TEST nvmf_perf 00:13:47.899 ************************************ 00:13:47.899 12:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:13:47.899 12:48:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:13:47.899 12:48:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:47.899 12:48:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:47.899 12:48:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:13:47.899 ************************************ 00:13:47.899 START TEST nvmf_fio_host 00:13:47.899 ************************************ 00:13:47.899 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:13:48.159 * Looking for test storage... 00:13:48.159 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:48.159 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:48.159 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:13:48.159 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:48.159 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:48.159 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:48.159 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:48.159 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:48.159 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:13:48.159 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:13:48.159 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:13:48.159 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:13:48.159 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:13:48.159 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:13:48.159 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:13:48.159 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:48.159 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:13:48.159 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:13:48.159 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:48.159 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:48.159 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:13:48.159 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:13:48.159 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:48.159 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:13:48.159 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:48.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.160 --rc genhtml_branch_coverage=1 00:13:48.160 --rc genhtml_function_coverage=1 00:13:48.160 --rc genhtml_legend=1 00:13:48.160 --rc geninfo_all_blocks=1 00:13:48.160 --rc geninfo_unexecuted_blocks=1 00:13:48.160 00:13:48.160 ' 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:48.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.160 --rc genhtml_branch_coverage=1 00:13:48.160 --rc genhtml_function_coverage=1 00:13:48.160 --rc genhtml_legend=1 00:13:48.160 --rc geninfo_all_blocks=1 00:13:48.160 --rc geninfo_unexecuted_blocks=1 00:13:48.160 00:13:48.160 ' 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:48.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.160 --rc genhtml_branch_coverage=1 00:13:48.160 --rc genhtml_function_coverage=1 00:13:48.160 --rc genhtml_legend=1 00:13:48.160 --rc geninfo_all_blocks=1 00:13:48.160 --rc geninfo_unexecuted_blocks=1 00:13:48.160 00:13:48.160 ' 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:48.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.160 --rc genhtml_branch_coverage=1 00:13:48.160 --rc genhtml_function_coverage=1 00:13:48.160 --rc genhtml_legend=1 00:13:48.160 --rc geninfo_all_blocks=1 00:13:48.160 --rc geninfo_unexecuted_blocks=1 00:13:48.160 00:13:48.160 ' 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.160 12:48:56 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.160 12:48:56 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:48.160 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:48.160 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
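The nvmftestinit call above runs with NET_TYPE=virt, so before the fio host test can reach a target it clears any stale nvmf_tgt_ns_spdk namespace and then rebuilds the virtual test network via nvmf_veth_init; the ip/iptables commands that follow in this log are that rebuild. The lines below are a condensed, hand-written sketch of what that setup amounts to, using only interface names, addresses and rules that appear in this log; the authoritative sequence (including the harmless "Cannot find device" messages from the pre-clean step) lives in test/nvmf/common.sh.

# Sketch of the veth topology nvmf_veth_init builds (abridged; names and addresses as in this log)
ip netns add nvmf_tgt_ns_spdk                                   # target side lives in its own namespace
ip link add nvmf_init_if  type veth peer name nvmf_init_br      # initiator veth pair 1
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2     # initiator veth pair 2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br       # target veth pair 1
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2      # target veth pair 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                 # move the target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator addresses
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip link add nvmf_br type bridge; ip link set nvmf_br up         # bridge joining the host-side peer ends
for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$br" up
    ip link set "$br" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP reach the listener
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                   # allow forwarding across the bridge

Once the links are up, the script pings 10.0.0.3/10.0.0.4 from the host and 10.0.0.1/10.0.0.2 from inside the namespace, as seen a little further down, to confirm the path works before the target process is started.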
00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:48.161 Cannot find device "nvmf_init_br" 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:48.161 Cannot find device "nvmf_init_br2" 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:48.161 Cannot find device "nvmf_tgt_br" 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:13:48.161 Cannot find device "nvmf_tgt_br2" 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:48.161 Cannot find device "nvmf_init_br" 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:13:48.161 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:48.420 Cannot find device "nvmf_init_br2" 00:13:48.420 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:13:48.420 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:48.420 Cannot find device "nvmf_tgt_br" 00:13:48.420 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:13:48.420 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:48.420 Cannot find device "nvmf_tgt_br2" 00:13:48.420 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:13:48.420 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:48.420 Cannot find device "nvmf_br" 00:13:48.420 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:13:48.420 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:48.420 Cannot find device "nvmf_init_if" 00:13:48.420 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:13:48.420 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:48.420 Cannot find device "nvmf_init_if2" 00:13:48.420 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:13:48.420 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:48.420 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:48.420 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:13:48.420 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:48.420 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:48.420 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:13:48.420 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:48.420 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:48.420 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:48.420 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:48.420 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:48.420 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:48.420 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:48.420 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:13:48.420 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:48.420 12:48:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:48.420 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:48.420 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:48.420 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:48.420 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:48.420 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:48.420 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:48.420 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:48.420 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:48.420 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:48.420 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:48.420 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:48.420 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:48.420 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:48.420 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:48.680 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:48.680 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:48.680 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:48.680 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:48.680 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:48.680 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:48.680 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:48.680 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:48.680 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:48.680 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:48.680 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:13:48.680 00:13:48.680 --- 10.0.0.3 ping statistics --- 00:13:48.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.680 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:13:48.680 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:48.680 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:48.680 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:13:48.680 00:13:48.680 --- 10.0.0.4 ping statistics --- 00:13:48.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.680 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:13:48.680 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:48.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:48.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:13:48.680 00:13:48.680 --- 10.0.0.1 ping statistics --- 00:13:48.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.680 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:13:48.680 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:48.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:48.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:13:48.680 00:13:48.680 --- 10.0.0.2 ping statistics --- 00:13:48.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.680 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:13:48.680 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:48.680 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:13:48.680 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:48.680 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:48.681 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:48.681 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:48.681 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:48.681 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:48.681 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:48.681 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:13:48.681 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:13:48.681 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:48.681 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:13:48.681 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74233 00:13:48.681 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:48.681 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:48.681 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74233 00:13:48.681 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 74233 ']' 00:13:48.681 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.681 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:48.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.681 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.681 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:48.681 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:13:48.681 [2024-11-15 12:48:57.250362] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:13:48.681 [2024-11-15 12:48:57.250445] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.940 [2024-11-15 12:48:57.402162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:48.940 [2024-11-15 12:48:57.442724] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:48.940 [2024-11-15 12:48:57.442782] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:48.940 [2024-11-15 12:48:57.442796] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:48.940 [2024-11-15 12:48:57.442806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:48.940 [2024-11-15 12:48:57.442815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
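The target that the fio host test starts here is a plain nvmf_tgt running inside the nvmf_tgt_ns_spdk namespace; once it is listening on /var/tmp/spdk.sock, the script configures it over JSON-RPC and then drives I/O against it with fio through the SPDK NVMe fio plugin. A condensed sketch of that sequence, built only from the rpc.py calls and the fio invocation visible in this log (the actual steps are spread across test/nvmf/host/fio.sh), looks roughly like this:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                    # create the TCP transport (flags as in the log)
$RPC bdev_malloc_create 64 512 -b Malloc1                       # 64 MB RAM-backed bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # expose Malloc1 as namespace 1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

# Host side: fio loads the SPDK external ioengine via LD_PRELOAD and addresses
# the subsystem with a transport-ID style "filename", exactly as in this log.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
  /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
  '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096

The second fio pass further down repeats the same invocation with mock_sgl_config.fio at a 16 KiB block size, presumably to exercise SGL handling, before the subsystem is deleted and the test tears the stack back down.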
00:13:48.940 [2024-11-15 12:48:57.443938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.940 [2024-11-15 12:48:57.444076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.940 [2024-11-15 12:48:57.444211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:48.940 [2024-11-15 12:48:57.444217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.940 [2024-11-15 12:48:57.479819] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:48.940 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:48.940 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:13:48.940 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:49.199 [2024-11-15 12:48:57.819267] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:49.199 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:13:49.199 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:49.199 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:13:49.457 12:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:49.716 Malloc1 00:13:49.716 12:48:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:49.974 12:48:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:50.232 12:48:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:50.232 [2024-11-15 12:48:58.884627] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:50.490 12:48:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:50.490 12:48:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:13:50.490 12:48:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:13:50.490 12:48:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:13:50.490 12:48:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:50.490 12:48:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:50.490 12:48:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:50.490 12:48:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:50.490 12:48:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:13:50.490 12:48:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:50.490 12:48:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:50.490 12:48:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:50.490 12:48:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:13:50.490 12:48:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:50.490 12:48:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:13:50.490 12:48:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:13:50.490 12:48:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:50.490 12:48:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:50.490 12:48:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:13:50.490 12:48:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:50.748 12:48:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:13:50.748 12:48:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:13:50.748 12:48:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:50.748 12:48:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:13:50.748 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:50.748 fio-3.35 00:13:50.748 Starting 1 thread 00:13:53.280 00:13:53.280 test: (groupid=0, jobs=1): err= 0: pid=74309: Fri Nov 15 12:49:01 2024 00:13:53.280 read: IOPS=9516, BW=37.2MiB/s (39.0MB/s)(74.6MiB/2006msec) 00:13:53.280 slat (nsec): min=1780, max=295729, avg=2344.00, stdev=2791.94 00:13:53.280 clat (usec): min=1883, max=12588, avg=7005.75, stdev=559.50 00:13:53.280 lat (usec): min=1918, max=12590, avg=7008.09, stdev=559.29 00:13:53.280 clat percentiles (usec): 00:13:53.280 | 1.00th=[ 5932], 5.00th=[ 6259], 10.00th=[ 6390], 20.00th=[ 6587], 00:13:53.280 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7046], 00:13:53.280 | 70.00th=[ 7242], 80.00th=[ 7373], 90.00th=[ 7635], 95.00th=[ 7898], 00:13:53.280 | 99.00th=[ 8455], 99.50th=[ 8717], 99.90th=[11076], 99.95th=[11994], 00:13:53.280 | 99.99th=[12518] 00:13:53.280 bw ( KiB/s): min=37400, max=38888, per=99.93%, avg=38042.00, stdev=759.80, samples=4 00:13:53.280 iops : min= 9350, max= 9722, avg=9510.50, stdev=189.95, samples=4 00:13:53.280 write: IOPS=9524, BW=37.2MiB/s (39.0MB/s)(74.6MiB/2006msec); 0 zone resets 00:13:53.280 slat (nsec): min=1842, max=154532, avg=2405.56, stdev=1798.61 00:13:53.280 clat (usec): min=1781, max=11842, avg=6391.44, stdev=509.84 00:13:53.280 lat (usec): min=1794, max=11844, avg=6393.84, stdev=509.74 00:13:53.280 
clat percentiles (usec): 00:13:53.280 | 1.00th=[ 5407], 5.00th=[ 5735], 10.00th=[ 5866], 20.00th=[ 5997], 00:13:53.280 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6325], 60.00th=[ 6456], 00:13:53.280 | 70.00th=[ 6587], 80.00th=[ 6718], 90.00th=[ 6980], 95.00th=[ 7242], 00:13:53.280 | 99.00th=[ 7767], 99.50th=[ 7963], 99.90th=[ 9765], 99.95th=[11076], 00:13:53.280 | 99.99th=[11863] 00:13:53.280 bw ( KiB/s): min=37696, max=38784, per=100.00%, avg=38098.00, stdev=485.58, samples=4 00:13:53.280 iops : min= 9424, max= 9696, avg=9524.50, stdev=121.40, samples=4 00:13:53.280 lat (msec) : 2=0.02%, 4=0.16%, 10=99.70%, 20=0.12% 00:13:53.280 cpu : usr=69.48%, sys=23.74%, ctx=10, majf=0, minf=7 00:13:53.280 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:13:53.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:53.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:53.280 issued rwts: total=19091,19106,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:53.280 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:53.280 00:13:53.280 Run status group 0 (all jobs): 00:13:53.280 READ: bw=37.2MiB/s (39.0MB/s), 37.2MiB/s-37.2MiB/s (39.0MB/s-39.0MB/s), io=74.6MiB (78.2MB), run=2006-2006msec 00:13:53.280 WRITE: bw=37.2MiB/s (39.0MB/s), 37.2MiB/s-37.2MiB/s (39.0MB/s-39.0MB/s), io=74.6MiB (78.3MB), run=2006-2006msec 00:13:53.280 12:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:13:53.280 12:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:13:53.280 12:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:53.280 12:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:53.280 12:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:53.280 12:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:53.280 12:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:13:53.280 12:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:53.280 12:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:53.280 12:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:53.280 12:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:13:53.280 12:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:53.280 12:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:13:53.280 12:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:13:53.280 12:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:53.280 12:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:53.280 12:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:53.280 12:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:13:53.280 12:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:13:53.280 12:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:13:53.280 12:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:53.280 12:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:13:53.280 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:13:53.280 fio-3.35 00:13:53.280 Starting 1 thread 00:13:55.812 00:13:55.812 test: (groupid=0, jobs=1): err= 0: pid=74352: Fri Nov 15 12:49:04 2024 00:13:55.812 read: IOPS=8865, BW=139MiB/s (145MB/s)(278MiB/2007msec) 00:13:55.812 slat (usec): min=2, max=110, avg= 3.67, stdev= 2.31 00:13:55.812 clat (usec): min=1630, max=16333, avg=8145.21, stdev=2587.56 00:13:55.812 lat (usec): min=1633, max=16336, avg=8148.88, stdev=2587.67 00:13:55.812 clat percentiles (usec): 00:13:55.812 | 1.00th=[ 3654], 5.00th=[ 4490], 10.00th=[ 4948], 20.00th=[ 5735], 00:13:55.812 | 30.00th=[ 6587], 40.00th=[ 7177], 50.00th=[ 7898], 60.00th=[ 8586], 00:13:55.812 | 70.00th=[ 9372], 80.00th=[10159], 90.00th=[11338], 95.00th=[13173], 00:13:55.812 | 99.00th=[15139], 99.50th=[15401], 99.90th=[16057], 99.95th=[16188], 00:13:55.812 | 99.99th=[16319] 00:13:55.812 bw ( KiB/s): min=64096, max=76352, per=50.37%, avg=71456.00, stdev=5262.48, samples=4 00:13:55.812 iops : min= 4006, max= 4772, avg=4466.00, stdev=328.91, samples=4 00:13:55.812 write: IOPS=5205, BW=81.3MiB/s (85.3MB/s)(146MiB/1789msec); 0 zone resets 00:13:55.812 slat (usec): min=31, max=258, avg=36.65, stdev= 8.49 00:13:55.812 clat (usec): min=3276, max=18389, avg=11232.45, stdev=2269.63 00:13:55.812 lat (usec): min=3308, max=18421, avg=11269.10, stdev=2272.01 00:13:55.812 clat percentiles (usec): 00:13:55.812 | 1.00th=[ 6915], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[ 9372], 00:13:55.812 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10814], 60.00th=[11338], 00:13:55.812 | 70.00th=[12125], 80.00th=[13304], 90.00th=[14615], 95.00th=[15533], 00:13:55.812 | 99.00th=[16581], 99.50th=[16909], 99.90th=[17957], 99.95th=[18220], 00:13:55.812 | 99.99th=[18482] 00:13:55.812 bw ( KiB/s): min=68896, max=78880, per=89.45%, avg=74496.00, stdev=4461.22, samples=4 00:13:55.812 iops : min= 4306, max= 4930, avg=4656.00, stdev=278.83, samples=4 00:13:55.812 lat (msec) : 2=0.01%, 4=1.29%, 10=60.82%, 20=37.88% 00:13:55.812 cpu : usr=81.66%, sys=13.66%, ctx=14, majf=0, minf=14 00:13:55.812 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:13:55.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:55.812 issued rwts: total=17794,9312,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.812 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:55.812 00:13:55.812 Run status group 0 (all jobs): 00:13:55.812 
READ: bw=139MiB/s (145MB/s), 139MiB/s-139MiB/s (145MB/s-145MB/s), io=278MiB (292MB), run=2007-2007msec 00:13:55.812 WRITE: bw=81.3MiB/s (85.3MB/s), 81.3MiB/s-81.3MiB/s (85.3MB/s-85.3MB/s), io=146MiB (153MB), run=1789-1789msec 00:13:55.812 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:55.812 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:13:55.812 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:13:55.812 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:13:55.812 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:13:55.812 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:55.812 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:13:56.072 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:56.072 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:13:56.072 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:56.072 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:56.072 rmmod nvme_tcp 00:13:56.072 rmmod nvme_fabrics 00:13:56.072 rmmod nvme_keyring 00:13:56.072 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:56.072 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:13:56.072 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:13:56.072 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 74233 ']' 00:13:56.072 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 74233 00:13:56.072 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 74233 ']' 00:13:56.072 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 74233 00:13:56.072 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:13:56.072 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:56.072 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74233 00:13:56.072 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:56.072 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:56.072 killing process with pid 74233 00:13:56.072 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74233' 00:13:56.072 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 74233 00:13:56.072 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 74233 00:13:56.330 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:56.330 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:56.330 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:56.330 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:13:56.330 12:49:04 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:13:56.330 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:56.330 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:13:56.330 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:56.330 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:56.330 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:56.330 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:56.330 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:56.330 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:56.330 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:56.330 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:56.330 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:56.330 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:56.330 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:56.330 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:56.330 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:56.330 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:56.330 12:49:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:56.588 12:49:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:56.588 12:49:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.588 12:49:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:56.588 12:49:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.588 12:49:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:13:56.588 00:13:56.588 real 0m8.512s 00:13:56.588 user 0m33.638s 00:13:56.588 sys 0m2.372s 00:13:56.588 12:49:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:56.588 12:49:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:13:56.588 ************************************ 00:13:56.588 END TEST nvmf_fio_host 00:13:56.588 ************************************ 00:13:56.588 12:49:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:13:56.588 12:49:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:56.588 12:49:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.589 12:49:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:13:56.589 ************************************ 00:13:56.589 START TEST nvmf_failover 
00:13:56.589 ************************************ 00:13:56.589 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:13:56.589 * Looking for test storage... 00:13:56.589 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:56.589 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:56.589 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:13:56.589 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:56.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.848 --rc genhtml_branch_coverage=1 00:13:56.848 --rc genhtml_function_coverage=1 00:13:56.848 --rc genhtml_legend=1 00:13:56.848 --rc geninfo_all_blocks=1 00:13:56.848 --rc geninfo_unexecuted_blocks=1 00:13:56.848 00:13:56.848 ' 00:13:56.848 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:56.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.848 --rc genhtml_branch_coverage=1 00:13:56.848 --rc genhtml_function_coverage=1 00:13:56.848 --rc genhtml_legend=1 00:13:56.848 --rc geninfo_all_blocks=1 00:13:56.849 --rc geninfo_unexecuted_blocks=1 00:13:56.849 00:13:56.849 ' 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:56.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.849 --rc genhtml_branch_coverage=1 00:13:56.849 --rc genhtml_function_coverage=1 00:13:56.849 --rc genhtml_legend=1 00:13:56.849 --rc geninfo_all_blocks=1 00:13:56.849 --rc geninfo_unexecuted_blocks=1 00:13:56.849 00:13:56.849 ' 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:56.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.849 --rc genhtml_branch_coverage=1 00:13:56.849 --rc genhtml_function_coverage=1 00:13:56.849 --rc genhtml_legend=1 00:13:56.849 --rc geninfo_all_blocks=1 00:13:56.849 --rc geninfo_unexecuted_blocks=1 00:13:56.849 00:13:56.849 ' 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.849 
12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:56.849 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
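The cmp_versions walk traced a little earlier (scripts/common.sh@333 onward) is just a field-wise numeric comparison of dotted version strings; here it decides that the detected lcov 1.15 is older than 2, after which the trace sets the --rc lcov_branch_coverage/--rc lcov_function_coverage options. A minimal stand-alone sketch of that comparison, assuming purely numeric dot-separated fields (the real helper also splits on '-' and ':' and is not reproduced here):

  # Sketch only: simplified stand-in for scripts/common.sh's lt/cmp_versions.
  version_lt() {
      local -a ver1 ver2
      IFS=. read -ra ver1 <<< "$1"
      IFS=. read -ra ver2 <<< "$2"
      local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for ((v = 0; v < n; v++)); do
          local a=${ver1[v]:-0} b=${ver2[v]:-0}
          ((a > b)) && return 1        # first differing field decides
          ((a < b)) && return 0
      done
      return 1                         # equal versions are not strictly less-than
  }

  version_lt 1.15 2 && echo "old lcov: use the --rc lcov_* option spelling"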
00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:56.849 Cannot find device "nvmf_init_br" 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:56.849 Cannot find device "nvmf_init_br2" 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:13:56.849 Cannot find device "nvmf_tgt_br" 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:56.849 Cannot find device "nvmf_tgt_br2" 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:56.849 Cannot find device "nvmf_init_br" 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:56.849 Cannot find device "nvmf_init_br2" 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:13:56.849 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:56.849 Cannot find device "nvmf_tgt_br" 00:13:56.850 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:13:56.850 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:56.850 Cannot find device "nvmf_tgt_br2" 00:13:56.850 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:13:56.850 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:56.850 Cannot find device "nvmf_br" 00:13:56.850 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:13:56.850 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:56.850 Cannot find device "nvmf_init_if" 00:13:56.850 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:13:56.850 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:56.850 Cannot find device "nvmf_init_if2" 00:13:56.850 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:13:56.850 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:56.850 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:56.850 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:13:56.850 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:56.850 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:56.850 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:13:56.850 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:56.850 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:56.850 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:56.850 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:56.850 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:56.850 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:56.850 
12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:56.850 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:56.850 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:56.850 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:56.850 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:57.109 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:57.109 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:57.109 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:57.109 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:57.109 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:57.109 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:57.109 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:57.109 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:57.109 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:57.109 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:57.109 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:57.109 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:57.109 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:57.109 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:57.109 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:57.109 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:57.109 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:57.109 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:57.109 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:57.109 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:57.109 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:13:57.109 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:57.109 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:57.109 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:13:57.109 00:13:57.109 --- 10.0.0.3 ping statistics --- 00:13:57.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.109 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:13:57.109 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:57.109 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:57.109 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:13:57.109 00:13:57.109 --- 10.0.0.4 ping statistics --- 00:13:57.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.109 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:13:57.110 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:57.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:57.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:13:57.110 00:13:57.110 --- 10.0.0.1 ping statistics --- 00:13:57.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.110 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:13:57.110 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:57.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:57.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:13:57.110 00:13:57.110 --- 10.0.0.2 ping statistics --- 00:13:57.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.110 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:13:57.110 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:57.110 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:13:57.110 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:57.110 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:57.110 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:57.110 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:57.110 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:57.110 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:57.110 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:57.110 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:13:57.110 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:57.110 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:57.110 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:13:57.110 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:57.110 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=74620 00:13:57.110 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 74620 00:13:57.110 12:49:05 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 74620 ']' 00:13:57.110 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.110 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:57.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.110 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.110 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:57.110 12:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:13:57.110 [2024-11-15 12:49:05.754206] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:13:57.110 [2024-11-15 12:49:05.754296] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.369 [2024-11-15 12:49:05.910381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:57.369 [2024-11-15 12:49:05.950396] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.369 [2024-11-15 12:49:05.950676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:57.369 [2024-11-15 12:49:05.950789] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:57.369 [2024-11-15 12:49:05.950922] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:57.369 [2024-11-15 12:49:05.951038] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
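Before the target comes up, the nvmf_veth_init trace above has built an entirely virtual TCP test network: two initiator-side veth interfaces in the root namespace (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2), two target-side interfaces moved into the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4), all of their bridge-side peers enslaved to nvmf_br, plus iptables ACCEPT rules for port 4420; the four pings confirm reachability in both directions. A condensed sketch of that layout, trimmed from the traced commands (needs root; the real helper also does the "Cannot find device" pre-clean shown above and adds rules for nvmf_init_if2 and a FORWARD rule on the bridge):

  # Sketch: the veth/bridge topology nvmf_veth_init builds, condensed.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  ip link add nvmf_br type bridge
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
      ip link set "$dev" master nvmf_br
  done
  ip link set nvmf_init_if up; ip link set nvmf_init_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_br up

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3   # root (initiator) namespace -> target namespace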
00:13:57.369 [2024-11-15 12:49:05.952176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.369 [2024-11-15 12:49:05.952079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:57.369 [2024-11-15 12:49:05.952168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:57.369 [2024-11-15 12:49:05.987251] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:57.628 12:49:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:57.628 12:49:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:13:57.628 12:49:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:57.628 12:49:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:57.628 12:49:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:13:57.628 12:49:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.628 12:49:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:57.886 [2024-11-15 12:49:06.365788] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:57.886 12:49:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:13:58.156 Malloc0 00:13:58.156 12:49:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:58.429 12:49:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:58.688 12:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:58.946 [2024-11-15 12:49:07.498502] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:58.946 12:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:13:59.205 [2024-11-15 12:49:07.726623] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:13:59.205 12:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:13:59.464 [2024-11-15 12:49:07.946819] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:13:59.464 12:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:13:59.464 12:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=74670 00:13:59.464 12:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
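With the network in place, the trace above launches nvmf_tgt inside the namespace (core mask 0xE, which is why reactors come up on cores 1 to 3), then drives it through rpc.py: TCP transport, a 64 MiB Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, and listeners on 10.0.0.3 ports 4420/4421/4422, before starting bdevperf idle on its own RPC socket. Reduced to plain commands this is roughly the following sketch; the actual run goes through the nvmfappstart/waitforlisten helpers and xtrace wrappers rather than the bare loop used here:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Target app inside the test namespace; -m 0xE pins reactors to cores 1-3.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Wait for the default RPC socket (/var/tmp/spdk.sock) to start answering.
  until "$rpc" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

  # One subsystem, one malloc namespace, three TCP listeners (the failover paths).
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" bdev_malloc_create 64 512 -b Malloc0
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.3 -s "$port"
  done

  # bdevperf waits idle (-z) on its own socket until the test attaches paths.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  bdevperf_pid=$!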
00:13:59.464 12:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 74670 /var/tmp/bdevperf.sock 00:13:59.464 12:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 74670 ']' 00:13:59.464 12:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:59.464 12:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:59.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:59.464 12:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:59.464 12:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:59.464 12:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:00.401 12:49:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:00.401 12:49:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:14:00.401 12:49:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:14:00.659 NVMe0n1 00:14:00.659 12:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:14:01.226 00:14:01.226 12:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=74699 00:14:01.226 12:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:01.226 12:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:14:02.161 12:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:02.421 [2024-11-15 12:49:10.926463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67acf0 is same with the state(6) to be set 00:14:02.421 [2024-11-15 12:49:10.926529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67acf0 is same with the state(6) to be set 00:14:02.421 [2024-11-15 12:49:10.926556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67acf0 is same with the state(6) to be set 00:14:02.421 [2024-11-15 12:49:10.926564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67acf0 is same with the state(6) to be set 00:14:02.421 [2024-11-15 12:49:10.926571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67acf0 is same with the state(6) to be set 00:14:02.421 [2024-11-15 12:49:10.926579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67acf0 is same with the state(6) to be set 00:14:02.421 [2024-11-15 12:49:10.926586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67acf0 is same with the state(6) to be set 00:14:02.421 [2024-11-15 12:49:10.926594] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67acf0 is same with the state(6) to be set 00:14:02.421 (the identical *ERROR* line repeats for every entry of this state-change burst, timestamps 12:49:10.926601 through 12:49:10.927384) [2024-11-15 12:49:10.927393]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67acf0 is same with the state(6) to be set 00:14:02.422 [2024-11-15 12:49:10.927401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67acf0 is same with the state(6) to be set 00:14:02.422 [2024-11-15 12:49:10.927409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67acf0 is same with the state(6) to be set 00:14:02.422 [2024-11-15 12:49:10.927416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67acf0 is same with the state(6) to be set 00:14:02.422 [2024-11-15 12:49:10.927424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67acf0 is same with the state(6) to be set 00:14:02.422 [2024-11-15 12:49:10.927432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67acf0 is same with the state(6) to be set 00:14:02.422 [2024-11-15 12:49:10.927440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67acf0 is same with the state(6) to be set 00:14:02.422 [2024-11-15 12:49:10.927448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67acf0 is same with the state(6) to be set 00:14:02.422 [2024-11-15 12:49:10.927455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67acf0 is same with the state(6) to be set 00:14:02.422 [2024-11-15 12:49:10.927478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67acf0 is same with the state(6) to be set 00:14:02.422 [2024-11-15 12:49:10.927486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67acf0 is same with the state(6) to be set 00:14:02.422 [2024-11-15 12:49:10.927493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67acf0 is same with the state(6) to be set 00:14:02.422 [2024-11-15 12:49:10.927501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67acf0 is same with the state(6) to be set 00:14:02.422 [2024-11-15 12:49:10.927508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67acf0 is same with the state(6) to be set 00:14:02.422 [2024-11-15 12:49:10.927516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67acf0 is same with the state(6) to be set 00:14:02.422 12:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:14:05.705 12:49:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:14:05.705 00:14:05.705 12:49:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:14:05.964 12:49:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:14:09.265 12:49:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:09.265 [2024-11-15 12:49:17.837221] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:09.265 12:49:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:14:10.199 12:49:18 
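The shell trace above is the heart of the failover exercise: failover.sh attaches the controller through the bdevperf RPC socket with -x failover so an additional path (port 4422) is registered for NVMe0, then tears listeners down and brings them back up on the target while I/O keeps running. A minimal sketch of that sequence, using only the rpc.py calls visible in the trace (socket path, NQN, addresses and ports are copied from it; the sleeps mirror the script's pacing and are otherwise illustrative):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    # initiator side: register the secondary path for the existing NVMe0 controller
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n $NQN -x failover
    # target side: drop one listener, wait, then re-add the original port
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.3 -s 4421
    sleep 3
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.3 -s 4420
    sleep 1
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.3 -s 4422

The final remove_listener on port 4422 (failover.sh@57, next entry) is what provokes the recv-state errors and the controller reset recorded further down in try.txt.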
00:14:10.199 12:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:14:10.458 [2024-11-15 12:49:19.116590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x679640 is same with the state(6) to be set
00:14:10.458 [2024-11-15 12:49:19.116661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x679640 is same with the state(6) to be set
00:14:10.458 [2024-11-15 12:49:19.116688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x679640 is same with the state(6) to be set
00:14:10.716 12:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 74699
00:14:17.288 {
00:14:17.288   "results": [
00:14:17.288     {
00:14:17.288       "job": "NVMe0n1",
00:14:17.288       "core_mask": "0x1",
00:14:17.288       "workload": "verify",
00:14:17.288       "status": "finished",
00:14:17.288       "verify_range": {
00:14:17.288         "start": 0,
00:14:17.288         "length": 16384
00:14:17.288       },
00:14:17.288       "queue_depth": 128,
00:14:17.288       "io_size": 4096,
00:14:17.288       "runtime": 15.009034,
00:14:17.288       "iops": 10117.240056888404,
00:14:17.288       "mibps": 39.52046897222033,
00:14:17.288       "io_failed": 3277,
00:14:17.288       "io_timeout": 0,
00:14:17.288       "avg_latency_us": 12355.830085683461,
00:14:17.288       "min_latency_us": 584.6109090909091,
00:14:17.288       "max_latency_us": 16681.890909090907
00:14:17.288     }
00:14:17.288   ],
00:14:17.288   "core_count": 1
00:14:17.288 }
00:14:17.288 12:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 74670
00:14:17.288 12:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 74670 ']'
00:14:17.288 12:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 74670
00:14:17.288 12:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:14:17.288 12:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:17.288 12:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74670
00:14:17.288 12:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:17.288 12:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:17.288 12:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74670'
00:14:17.288 killing process with pid 74670
00:14:17.288 12:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 74670
00:14:17.288 12:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 74670
00:14:17.288 12:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:14:17.288 [2024-11-15 12:49:08.019204] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization...
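A quick sanity check on the run summary printed above under failover.sh@59: the mibps figure is just iops times io_size scaled to MiB, so the reported counters can be cross-checked directly (the numbers are copied from that results block):

    # 10117.24 IOPS of 4096-byte I/O expressed in MiB/s
    awk 'BEGIN { printf "%.8f MiB/s\n", 10117.240056888404 * 4096 / (1024 * 1024) }'

This prints 39.52046897 MiB/s, matching the reported mibps, so the throughput and I/O-size figures are self-consistent even with the 3277 failed I/Os absorbed during the listener flips. What follows is the captured bdevperf log, try.txt, replayed by failover.sh@63.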
00:14:17.288 [2024-11-15 12:49:08.019307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74670 ] 00:14:17.288 [2024-11-15 12:49:08.160043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.288 [2024-11-15 12:49:08.190244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.288 [2024-11-15 12:49:08.219022] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:17.288 Running I/O for 15 seconds... 00:14:17.288 7716.00 IOPS, 30.14 MiB/s [2024-11-15T12:49:25.958Z] [2024-11-15 12:49:10.927578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:69920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.288 [2024-11-15 12:49:10.927636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.288 [2024-11-15 12:49:10.927664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:69928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.288 [2024-11-15 12:49:10.927680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.288 [2024-11-15 12:49:10.927697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:69936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.288 [2024-11-15 12:49:10.927727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.288 [2024-11-15 12:49:10.927745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:69944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.288 [2024-11-15 12:49:10.927760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.288 [2024-11-15 12:49:10.927776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.288 [2024-11-15 12:49:10.927791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.288 [2024-11-15 12:49:10.927807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:69960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.288 [2024-11-15 12:49:10.927821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.288 [2024-11-15 12:49:10.927837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:69968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.288 [2024-11-15 12:49:10.927851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.288 [2024-11-15 12:49:10.927867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:69976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.288 [2024-11-15 12:49:10.927882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:14:17.288 [2024-11-15 12:49:10.927898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:69984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.288 [2024-11-15 12:49:10.927912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.288 [2024-11-15 12:49:10.927928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:69992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.288 [2024-11-15 12:49:10.927942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.288 [2024-11-15 12:49:10.927958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:70000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.288 [2024-11-15 12:49:10.927996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.928028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:70008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.928043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.928058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:70016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.928072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.928088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:70024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.928101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.928117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:70032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.928131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.928146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.928160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.928176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:70048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.928194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.928226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:70056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.928239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 
12:49:10.928255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:70064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.928268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.928284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:70072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.928297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.928312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:70080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.928325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.928341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:70088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.928354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.928369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:70096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.928383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.928405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.928420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.928435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:70112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.928449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.928464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:70120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.928477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.928492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:70128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.928506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.928521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:70136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.928535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.928550] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:70144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.928563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.928578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:70152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.928592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.928608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:70160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.928621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.928664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:70168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.928679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.928696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:70176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.928712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.928728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:70184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.928742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.928758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:70192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.928771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.928788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:70200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.928809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.928825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.928839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.928855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:70216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.928869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.928885] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:70224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.928898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.928914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:70232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.928927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.928943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:70240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.928957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.928972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:70248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.929000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.929015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:70256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.929029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.929044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:70264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.929057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.929072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:70272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.929085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.929100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.929114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.929129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:70288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.929142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.929157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:70296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.929170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.929196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:70304 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.929213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.929229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:70312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.289 [2024-11-15 12:49:10.929242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.289 [2024-11-15 12:49:10.929257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:70320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.929271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.929286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:70328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.929299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.929314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:70336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.929328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.929343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.929356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.929371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:70352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.929385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.929400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.929414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.929429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:70368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.929442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.929457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:70376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.929470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.929485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:70384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:17.290 [2024-11-15 12:49:10.929499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.929515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:70392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.929528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.929543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:70400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.929556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.929577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:70408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.929591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.929607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.929630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.929647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:70424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.929661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.929704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:70432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.929725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.929743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.929758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.929775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:70448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.929790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.929806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.929821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.929838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:70464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.929852] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.929869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:70472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.929883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.929900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:70480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.929916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.929933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:70488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.929947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.929964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:70496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.929993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.930023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:70504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.930058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.930082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:70512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.930096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.930110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:70520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.930123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.930138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:70528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.930151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.930166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:70536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.930179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.930194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:70544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.930207] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.930222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:70552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.930235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.930250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:70560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.930265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.930280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:70568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.930293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.930308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:70576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.930321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.930336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:70584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.930349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.930364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:70592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.930377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.930392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:70600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.930405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.930426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.930439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.930454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:70616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.930468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.930482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:70624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.930495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.930510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:70632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.290 [2024-11-15 12:49:10.930523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.290 [2024-11-15 12:49:10.930537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:70640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.291 [2024-11-15 12:49:10.930550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.930565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:70648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.291 [2024-11-15 12:49:10.930578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.930592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:70656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.291 [2024-11-15 12:49:10.930606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.930620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:70664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.291 [2024-11-15 12:49:10.930633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.930648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:70672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.291 [2024-11-15 12:49:10.930671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.930688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:70680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.291 [2024-11-15 12:49:10.930701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.930716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:70688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.291 [2024-11-15 12:49:10.930731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.930746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:70696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.291 [2024-11-15 12:49:10.930760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.930774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:70704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.291 [2024-11-15 12:49:10.930794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.930809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.291 [2024-11-15 12:49:10.930822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.930837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:70720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.291 [2024-11-15 12:49:10.930850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.930865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.291 [2024-11-15 12:49:10.930879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.930894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.291 [2024-11-15 12:49:10.930907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.930922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:70744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.291 [2024-11-15 12:49:10.930935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.930954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:70752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.291 [2024-11-15 12:49:10.930969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.930984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:70760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.291 [2024-11-15 12:49:10.930997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.931011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:70768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.291 [2024-11-15 12:49:10.931024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.931039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:70776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.291 [2024-11-15 12:49:10.931052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.931067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:70784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.291 [2024-11-15 12:49:10.931080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:17.291 [2024-11-15 12:49:10.931094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:70792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.291 [2024-11-15 12:49:10.931107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.931122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:70800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.291 [2024-11-15 12:49:10.931135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.931150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.291 [2024-11-15 12:49:10.931169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.931184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.291 [2024-11-15 12:49:10.931199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.931214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.291 [2024-11-15 12:49:10.931227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.931242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.291 [2024-11-15 12:49:10.931255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.931270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.291 [2024-11-15 12:49:10.931283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.931297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.291 [2024-11-15 12:49:10.931310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.931324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.291 [2024-11-15 12:49:10.931337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.931352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:70880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.291 [2024-11-15 12:49:10.931365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.931380] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.291 [2024-11-15 12:49:10.931392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.931409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.291 [2024-11-15 12:49:10.931422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.931437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.291 [2024-11-15 12:49:10.931450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.931464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.291 [2024-11-15 12:49:10.931477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.931491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.291 [2024-11-15 12:49:10.931504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.931524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.291 [2024-11-15 12:49:10.931538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.931553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.291 [2024-11-15 12:49:10.931566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.931581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:70808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.291 [2024-11-15 12:49:10.931594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.931635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebefc0 is same with the state(6) to be set 00:14:17.291 [2024-11-15 12:49:10.931653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:17.291 [2024-11-15 12:49:10.931663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:17.291 [2024-11-15 12:49:10.931677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70816 len:8 PRP1 0x0 PRP2 0x0 00:14:17.291 [2024-11-15 12:49:10.931691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.291 [2024-11-15 12:49:10.931740] 
bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:14:17.291 [2024-11-15 12:49:10.931802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.292 [2024-11-15 12:49:10.931823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:10.931839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.292 [2024-11-15 12:49:10.931852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:10.931867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.292 [2024-11-15 12:49:10.931880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:10.931894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.292 [2024-11-15 12:49:10.931908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:10.931921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:14:17.292 [2024-11-15 12:49:10.931961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe22710 (9): Bad file descriptor 00:14:17.292 [2024-11-15 12:49:10.935515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:17.292 [2024-11-15 12:49:10.961930] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
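The two notices just above (bdev_nvme_failover_trid moving the controller from 10.0.0.3:4420 to 10.0.0.3:4421, followed by bdev_nvme_reset_ctrlr_complete reporting a successful reset) mark one complete path switch during the run. To pull only those events out of a capture like try.txt instead of reading the full qpair dump, a grep over the notice strings seen above is enough (the path is taken from the failover.sh@63 entry; adjust for another capture):

    grep -E 'bdev_nvme_failover_trid|resetting controller|Resetting controller successful' \
        /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt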
00:14:17.292 8645.50 IOPS, 33.77 MiB/s [2024-11-15T12:49:25.962Z] 9262.33 IOPS, 36.18 MiB/s [2024-11-15T12:49:25.962Z] 9561.25 IOPS, 37.35 MiB/s [2024-11-15T12:49:25.962Z] [2024-11-15 12:49:14.562172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:116968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.292 [2024-11-15 12:49:14.562236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:14.562300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:116976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.292 [2024-11-15 12:49:14.562317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:14.562332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:116984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.292 [2024-11-15 12:49:14.562346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:14.562360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:116992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.292 [2024-11-15 12:49:14.562373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:14.562388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:117000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.292 [2024-11-15 12:49:14.562401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:14.562416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:117008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.292 [2024-11-15 12:49:14.562428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:14.562443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:117016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.292 [2024-11-15 12:49:14.562456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:14.562471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:117024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.292 [2024-11-15 12:49:14.562483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:14.562498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:116392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.292 [2024-11-15 12:49:14.562511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:14.562525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:116400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.292 [2024-11-15 12:49:14.562539] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:14.562553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:116408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.292 [2024-11-15 12:49:14.562566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:14.562580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:116416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.292 [2024-11-15 12:49:14.562593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:14.562608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:116424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.292 [2024-11-15 12:49:14.562634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:14.562650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:116432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.292 [2024-11-15 12:49:14.562672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:14.562688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:116440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.292 [2024-11-15 12:49:14.562701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:14.562716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:116448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.292 [2024-11-15 12:49:14.562729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:14.562743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:116456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.292 [2024-11-15 12:49:14.562756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:14.562772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:116464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.292 [2024-11-15 12:49:14.562785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:14.562800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:116472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.292 [2024-11-15 12:49:14.562812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:14.562827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:116480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.292 [2024-11-15 12:49:14.562839] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:14.562854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:116488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.292 [2024-11-15 12:49:14.562867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:14.562881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:116496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.292 [2024-11-15 12:49:14.562894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:14.562908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:116504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.292 [2024-11-15 12:49:14.562921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:14.562935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:116512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.292 [2024-11-15 12:49:14.562948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:14.562962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:117032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.292 [2024-11-15 12:49:14.562975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:14.562989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:117040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.292 [2024-11-15 12:49:14.563002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:14.563023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:117048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.292 [2024-11-15 12:49:14.563037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.292 [2024-11-15 12:49:14.563052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:117056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.293 [2024-11-15 12:49:14.563065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:117064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.293 [2024-11-15 12:49:14.563109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:117072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.293 [2024-11-15 12:49:14.563137] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:117080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.293 [2024-11-15 12:49:14.563166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:117088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.293 [2024-11-15 12:49:14.563194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:116520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.293 [2024-11-15 12:49:14.563222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:116528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.293 [2024-11-15 12:49:14.563250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:116536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.293 [2024-11-15 12:49:14.563279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:116544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.293 [2024-11-15 12:49:14.563307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.293 [2024-11-15 12:49:14.563335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:116560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.293 [2024-11-15 12:49:14.563362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:116568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.293 [2024-11-15 12:49:14.563396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:116576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.293 [2024-11-15 12:49:14.563425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:116584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.293 [2024-11-15 12:49:14.563453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:116592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.293 [2024-11-15 12:49:14.563482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:116600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.293 [2024-11-15 12:49:14.563510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:116608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.293 [2024-11-15 12:49:14.563538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:116616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.293 [2024-11-15 12:49:14.563566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:116624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.293 [2024-11-15 12:49:14.563593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:116632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.293 [2024-11-15 12:49:14.563649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:116640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.293 [2024-11-15 12:49:14.563680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:116648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.293 [2024-11-15 12:49:14.563709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:116656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.293 [2024-11-15 12:49:14.563737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:116664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.293 [2024-11-15 12:49:14.563767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:116672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.293 [2024-11-15 12:49:14.563807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:116680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.293 [2024-11-15 12:49:14.563837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:116688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.293 [2024-11-15 12:49:14.563866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:116696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.293 [2024-11-15 12:49:14.563894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:116704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.293 [2024-11-15 12:49:14.563923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:117096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.293 [2024-11-15 12:49:14.563955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:117104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.293 [2024-11-15 12:49:14.563984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.563999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:117112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.293 [2024-11-15 12:49:14.564027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.564042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:117120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.293 [2024-11-15 12:49:14.564055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:17.293 [2024-11-15 12:49:14.564070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:117128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.293 [2024-11-15 12:49:14.564083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.564098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:117136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.293 [2024-11-15 12:49:14.564112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.564127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:117144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.293 [2024-11-15 12:49:14.564140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.564155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:117152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.293 [2024-11-15 12:49:14.564168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.564189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:117160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.293 [2024-11-15 12:49:14.564203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.564218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.293 [2024-11-15 12:49:14.564231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.293 [2024-11-15 12:49:14.564246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:117176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.294 [2024-11-15 12:49:14.564260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.564274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:117184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.294 [2024-11-15 12:49:14.564288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.564303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:117192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.294 [2024-11-15 12:49:14.564316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.564331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:117200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.294 [2024-11-15 12:49:14.564344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.564359] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:117208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.294 [2024-11-15 12:49:14.564372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.564387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:117216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.294 [2024-11-15 12:49:14.564401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.564416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:116712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.294 [2024-11-15 12:49:14.564429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.564444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:116720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.294 [2024-11-15 12:49:14.564457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.564472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:116728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.294 [2024-11-15 12:49:14.564485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.564501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:116736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.294 [2024-11-15 12:49:14.564513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.564529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:116744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.294 [2024-11-15 12:49:14.564547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.564563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:116752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.294 [2024-11-15 12:49:14.564576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.564592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:116760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.294 [2024-11-15 12:49:14.564605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.564630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:116768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.294 [2024-11-15 12:49:14.564644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.564675] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:117224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.294 [2024-11-15 12:49:14.564689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.564704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:117232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.294 [2024-11-15 12:49:14.564718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.564733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:117240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.294 [2024-11-15 12:49:14.564747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.564762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.294 [2024-11-15 12:49:14.564776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.564791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:117256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.294 [2024-11-15 12:49:14.564805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.564820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:117264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.294 [2024-11-15 12:49:14.564833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.564848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:117272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.294 [2024-11-15 12:49:14.564862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.564877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:117280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.294 [2024-11-15 12:49:14.564891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.564906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:117288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.294 [2024-11-15 12:49:14.564920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.564942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:117296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.294 [2024-11-15 12:49:14.564956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.564971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:117 nsid:1 lba:117304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.294 [2024-11-15 12:49:14.564985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.565000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:117312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.294 [2024-11-15 12:49:14.565014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.565029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:117320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.294 [2024-11-15 12:49:14.565057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.565072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:117328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.294 [2024-11-15 12:49:14.565086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.565105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:117336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.294 [2024-11-15 12:49:14.565119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.565134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:117344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.294 [2024-11-15 12:49:14.565148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.565163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:116776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.294 [2024-11-15 12:49:14.565175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.565190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:116784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.294 [2024-11-15 12:49:14.565204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.565219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.294 [2024-11-15 12:49:14.565233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.565247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:116800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.294 [2024-11-15 12:49:14.565260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.565275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116808 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.294 [2024-11-15 12:49:14.565289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.565304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:116816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.294 [2024-11-15 12:49:14.565323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.565338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:116824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.294 [2024-11-15 12:49:14.565351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.565366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:116832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.294 [2024-11-15 12:49:14.565380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.565395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:116840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.294 [2024-11-15 12:49:14.565408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.565422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:116848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.294 [2024-11-15 12:49:14.565436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.294 [2024-11-15 12:49:14.565451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:116856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.295 [2024-11-15 12:49:14.565464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:14.565478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:116864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.295 [2024-11-15 12:49:14.565491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:14.565506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:116872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.295 [2024-11-15 12:49:14.565519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:14.565534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:116880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.295 [2024-11-15 12:49:14.565547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:14.565563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116888 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:14:17.295 [2024-11-15 12:49:14.565576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:14.565591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.295 [2024-11-15 12:49:14.565605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:14.565619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:117352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.295 [2024-11-15 12:49:14.565642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:14.565659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:117360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.295 [2024-11-15 12:49:14.565698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:14.565740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:117368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.295 [2024-11-15 12:49:14.565755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:14.565770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:117376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.295 [2024-11-15 12:49:14.565784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:14.565800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:117384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.295 [2024-11-15 12:49:14.565814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:14.565829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:117392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.295 [2024-11-15 12:49:14.565843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:14.565859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:117400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.295 [2024-11-15 12:49:14.565873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:14.565888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:117408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.295 [2024-11-15 12:49:14.565902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:14.565918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:116904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.295 [2024-11-15 
12:49:14.565932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:14.565948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:116912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.295 [2024-11-15 12:49:14.565963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:14.565979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:116920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.295 [2024-11-15 12:49:14.565993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:14.566023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:116928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.295 [2024-11-15 12:49:14.566037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:14.566066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:116936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.295 [2024-11-15 12:49:14.566079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:14.566094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.295 [2024-11-15 12:49:14.566107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:14.566123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:116952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.295 [2024-11-15 12:49:14.566141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:14.566157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebf9e0 is same with the state(6) to be set 00:14:17.295 [2024-11-15 12:49:14.566173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:17.295 [2024-11-15 12:49:14.566187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:17.295 [2024-11-15 12:49:14.566198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116960 len:8 PRP1 0x0 PRP2 0x0 00:14:17.295 [2024-11-15 12:49:14.566211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:14.566258] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:14:17.295 [2024-11-15 12:49:14.566312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.295 [2024-11-15 12:49:14.566333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 
[2024-11-15 12:49:14.566348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.295 [2024-11-15 12:49:14.566361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:14.566375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.295 [2024-11-15 12:49:14.566387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:14.566401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.295 [2024-11-15 12:49:14.566414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:14.566428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:14:17.295 [2024-11-15 12:49:14.569951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:14:17.295 [2024-11-15 12:49:14.569989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe22710 (9): Bad file descriptor 00:14:17.295 [2024-11-15 12:49:14.597090] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:14:17.295 9626.60 IOPS, 37.60 MiB/s [2024-11-15T12:49:25.965Z] 9739.50 IOPS, 38.04 MiB/s [2024-11-15T12:49:25.965Z] 9841.86 IOPS, 38.44 MiB/s [2024-11-15T12:49:25.965Z] 9912.62 IOPS, 38.72 MiB/s [2024-11-15T12:49:25.965Z] 9965.00 IOPS, 38.93 MiB/s [2024-11-15T12:49:25.965Z] [2024-11-15 12:49:19.117208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.295 [2024-11-15 12:49:19.117257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:19.117284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:107392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.295 [2024-11-15 12:49:19.117300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:19.117331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:107400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.295 [2024-11-15 12:49:19.117345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:19.117375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:107408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.295 [2024-11-15 12:49:19.117411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:19.117428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:107800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.295 [2024-11-15 12:49:19.117441] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:19.117456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:107808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.295 [2024-11-15 12:49:19.117469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:19.117483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.295 [2024-11-15 12:49:19.117496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:19.117510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.295 [2024-11-15 12:49:19.117523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:19.117538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:107832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.295 [2024-11-15 12:49:19.117551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:19.117565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.295 [2024-11-15 12:49:19.117578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.295 [2024-11-15 12:49:19.117593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:107848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.296 [2024-11-15 12:49:19.117605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.117636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:107856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.296 [2024-11-15 12:49:19.117649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.117663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.296 [2024-11-15 12:49:19.117715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.117733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:107872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.296 [2024-11-15 12:49:19.117746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.117762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.296 [2024-11-15 12:49:19.117775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.117791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:107888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.296 [2024-11-15 12:49:19.117805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.117830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:107896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.296 [2024-11-15 12:49:19.117845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.117863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:107904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.296 [2024-11-15 12:49:19.117877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.117893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:107912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.296 [2024-11-15 12:49:19.117907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.117923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:107920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.296 [2024-11-15 12:49:19.117937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.117952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:107416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.296 [2024-11-15 12:49:19.117980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.117995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:107424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.296 [2024-11-15 12:49:19.118008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.118023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:107432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.296 [2024-11-15 12:49:19.118051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.118081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:107440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.296 [2024-11-15 12:49:19.118094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.118109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:107448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.296 [2024-11-15 12:49:19.118121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.118135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:107456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.296 [2024-11-15 12:49:19.118148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.118162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:107464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.296 [2024-11-15 12:49:19.118175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.118190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:107472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.296 [2024-11-15 12:49:19.118203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.118217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:107480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.296 [2024-11-15 12:49:19.118236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.118251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:107488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.296 [2024-11-15 12:49:19.118264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.118278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:107496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.296 [2024-11-15 12:49:19.118291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.118306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:107504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.296 [2024-11-15 12:49:19.118319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.118334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:107512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.296 [2024-11-15 12:49:19.118347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.118362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:107520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.296 [2024-11-15 12:49:19.118375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.118390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:107528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.296 [2024-11-15 12:49:19.118403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:14:17.296 [2024-11-15 12:49:19.118417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.296 [2024-11-15 12:49:19.118430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.118444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:107928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.296 [2024-11-15 12:49:19.118457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.118471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:107936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.296 [2024-11-15 12:49:19.118484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.118499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:107944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.296 [2024-11-15 12:49:19.118511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.118526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:107952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.296 [2024-11-15 12:49:19.118539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.118553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:107960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.296 [2024-11-15 12:49:19.118565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.118613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:107968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.296 [2024-11-15 12:49:19.118665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.118685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:107976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.296 [2024-11-15 12:49:19.118699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.118714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:107984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.296 [2024-11-15 12:49:19.118727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.118742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:107992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.296 [2024-11-15 12:49:19.118755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 
12:49:19.118770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:108000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.296 [2024-11-15 12:49:19.118783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.118798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:108008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.296 [2024-11-15 12:49:19.118811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.118826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:108016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.296 [2024-11-15 12:49:19.118840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.118855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:108024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.296 [2024-11-15 12:49:19.118869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.296 [2024-11-15 12:49:19.118884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:108032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.296 [2024-11-15 12:49:19.118898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.118913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:108040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.297 [2024-11-15 12:49:19.118926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.118941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:108048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.297 [2024-11-15 12:49:19.118954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.118969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:107544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.297 [2024-11-15 12:49:19.118983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.118998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:107552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.297 [2024-11-15 12:49:19.119011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.119048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:107560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.297 [2024-11-15 12:49:19.119062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.119076] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:107568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.297 [2024-11-15 12:49:19.119089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.119103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:107576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.297 [2024-11-15 12:49:19.119116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.119130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:107584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.297 [2024-11-15 12:49:19.119143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.119158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:107592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.297 [2024-11-15 12:49:19.119170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.119185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:107600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.297 [2024-11-15 12:49:19.119197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.119212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:108056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.297 [2024-11-15 12:49:19.119224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.119239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.297 [2024-11-15 12:49:19.119251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.119265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:108072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.297 [2024-11-15 12:49:19.119278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.119293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.297 [2024-11-15 12:49:19.119306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.119320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:108088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.297 [2024-11-15 12:49:19.119333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.119347] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:108096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.297 [2024-11-15 12:49:19.119360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.119375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:108104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.297 [2024-11-15 12:49:19.119394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.119409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.297 [2024-11-15 12:49:19.119422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.119437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:107608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.297 [2024-11-15 12:49:19.119450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.119464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.297 [2024-11-15 12:49:19.119476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.119491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:107624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.297 [2024-11-15 12:49:19.119503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.119518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:107632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.297 [2024-11-15 12:49:19.119531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.119545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:107640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.297 [2024-11-15 12:49:19.119558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.119573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.297 [2024-11-15 12:49:19.119586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.119602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:107656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.297 [2024-11-15 12:49:19.119624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.119642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 
lba:107664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.297 [2024-11-15 12:49:19.119655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.119669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:107672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.297 [2024-11-15 12:49:19.119683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.119697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:107680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.297 [2024-11-15 12:49:19.119710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.119724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:107688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.297 [2024-11-15 12:49:19.119737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.119759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:107696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.297 [2024-11-15 12:49:19.119772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.119787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:107704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.297 [2024-11-15 12:49:19.119799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.119814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:107712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.297 [2024-11-15 12:49:19.119827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.297 [2024-11-15 12:49:19.119842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:107720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.297 [2024-11-15 12:49:19.119854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.119868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:107728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.298 [2024-11-15 12:49:19.119881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.119895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:108120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.298 [2024-11-15 12:49:19.119908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.119923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:108128 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:14:17.298 [2024-11-15 12:49:19.119935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.119950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:108136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.298 [2024-11-15 12:49:19.119963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.119977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:108144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.298 [2024-11-15 12:49:19.119990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.120004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.298 [2024-11-15 12:49:19.120017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.120031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:108160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.298 [2024-11-15 12:49:19.120044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.120059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.298 [2024-11-15 12:49:19.120072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.120087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.298 [2024-11-15 12:49:19.120105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.120120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:108184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.298 [2024-11-15 12:49:19.120133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.120147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.298 [2024-11-15 12:49:19.120160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.120175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.298 [2024-11-15 12:49:19.120189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.120203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:108208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.298 
[2024-11-15 12:49:19.120216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.120231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.298 [2024-11-15 12:49:19.120244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.120258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:108224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.298 [2024-11-15 12:49:19.120271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.120285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:108232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.298 [2024-11-15 12:49:19.120298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.120312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:108240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.298 [2024-11-15 12:49:19.120325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.120340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:107736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.298 [2024-11-15 12:49:19.120353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.120367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:107744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.298 [2024-11-15 12:49:19.120380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.120394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:107752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.298 [2024-11-15 12:49:19.120407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.120421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:107760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.298 [2024-11-15 12:49:19.120434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.120455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:107768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.298 [2024-11-15 12:49:19.120468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.120483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:107776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.298 [2024-11-15 12:49:19.120496] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.120510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:107784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.298 [2024-11-15 12:49:19.120523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.120536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebf6a0 is same with the state(6) to be set 00:14:17.298 [2024-11-15 12:49:19.120553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:17.298 [2024-11-15 12:49:19.120562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:17.298 [2024-11-15 12:49:19.120572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107792 len:8 PRP1 0x0 PRP2 0x0 00:14:17.298 [2024-11-15 12:49:19.120585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.120626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:17.298 [2024-11-15 12:49:19.120642] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:17.298 [2024-11-15 12:49:19.120653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108248 len:8 PRP1 0x0 PRP2 0x0 00:14:17.298 [2024-11-15 12:49:19.120666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.120680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:17.298 [2024-11-15 12:49:19.120690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:17.298 [2024-11-15 12:49:19.120700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108256 len:8 PRP1 0x0 PRP2 0x0 00:14:17.298 [2024-11-15 12:49:19.120713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.120726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:17.298 [2024-11-15 12:49:19.120735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:17.298 [2024-11-15 12:49:19.120745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108264 len:8 PRP1 0x0 PRP2 0x0 00:14:17.298 [2024-11-15 12:49:19.120758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.120771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:17.298 [2024-11-15 12:49:19.120781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:17.298 [2024-11-15 12:49:19.120791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108272 len:8 PRP1 0x0 PRP2 0x0 00:14:17.298 [2024-11-15 12:49:19.120804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.120817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:17.298 [2024-11-15 12:49:19.120827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:17.298 [2024-11-15 12:49:19.120837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108280 len:8 PRP1 0x0 PRP2 0x0 00:14:17.298 [2024-11-15 12:49:19.120857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.120876] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:17.298 [2024-11-15 12:49:19.120887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:17.298 [2024-11-15 12:49:19.120897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108288 len:8 PRP1 0x0 PRP2 0x0 00:14:17.298 [2024-11-15 12:49:19.120909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.120922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:17.298 [2024-11-15 12:49:19.120931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:17.298 [2024-11-15 12:49:19.120941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108296 len:8 PRP1 0x0 PRP2 0x0 00:14:17.298 [2024-11-15 12:49:19.120954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.298 [2024-11-15 12:49:19.120967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:17.298 [2024-11-15 12:49:19.120977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:17.298 [2024-11-15 12:49:19.120986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108304 len:8 PRP1 0x0 PRP2 0x0 00:14:17.298 [2024-11-15 12:49:19.120999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.299 [2024-11-15 12:49:19.121012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:17.299 [2024-11-15 12:49:19.121022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:17.299 [2024-11-15 12:49:19.121032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108312 len:8 PRP1 0x0 PRP2 0x0 00:14:17.299 [2024-11-15 12:49:19.121045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.299 [2024-11-15 12:49:19.121057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:17.299 [2024-11-15 12:49:19.121067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:17.299 [2024-11-15 12:49:19.121077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108320 len:8 PRP1 0x0 PRP2 0x0 00:14:17.299 [2024-11-15 12:49:19.121090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.299 [2024-11-15 
12:49:19.121103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:17.299 [2024-11-15 12:49:19.121112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:17.299 [2024-11-15 12:49:19.121122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108328 len:8 PRP1 0x0 PRP2 0x0 00:14:17.299 [2024-11-15 12:49:19.121135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.299 [2024-11-15 12:49:19.121148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:17.299 [2024-11-15 12:49:19.121157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:17.299 [2024-11-15 12:49:19.121167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108336 len:8 PRP1 0x0 PRP2 0x0 00:14:17.299 [2024-11-15 12:49:19.121180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.299 [2024-11-15 12:49:19.121193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:17.299 [2024-11-15 12:49:19.121207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:17.299 [2024-11-15 12:49:19.121218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108344 len:8 PRP1 0x0 PRP2 0x0 00:14:17.299 [2024-11-15 12:49:19.121231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.299 [2024-11-15 12:49:19.121244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:17.299 [2024-11-15 12:49:19.121254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:17.299 [2024-11-15 12:49:19.121265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108352 len:8 PRP1 0x0 PRP2 0x0 00:14:17.299 [2024-11-15 12:49:19.121277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.299 [2024-11-15 12:49:19.121290] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:17.299 [2024-11-15 12:49:19.121300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:17.299 [2024-11-15 12:49:19.121309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108360 len:8 PRP1 0x0 PRP2 0x0 00:14:17.299 [2024-11-15 12:49:19.121322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.299 [2024-11-15 12:49:19.121335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:17.299 [2024-11-15 12:49:19.121345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:17.299 [2024-11-15 12:49:19.121354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108368 len:8 PRP1 0x0 PRP2 0x0 00:14:17.299 [2024-11-15 12:49:19.121367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.299 [2024-11-15 12:49:19.121380] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:17.299 [2024-11-15 12:49:19.121390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:17.299 [2024-11-15 12:49:19.121400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108376 len:8 PRP1 0x0 PRP2 0x0 00:14:17.299 [2024-11-15 12:49:19.121412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.299 [2024-11-15 12:49:19.121425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:17.299 [2024-11-15 12:49:19.121435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:17.299 [2024-11-15 12:49:19.121445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108384 len:8 PRP1 0x0 PRP2 0x0 00:14:17.299 [2024-11-15 12:49:19.121458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.299 [2024-11-15 12:49:19.121471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:17.299 [2024-11-15 12:49:19.121480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:17.299 [2024-11-15 12:49:19.121490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108392 len:8 PRP1 0x0 PRP2 0x0 00:14:17.299 [2024-11-15 12:49:19.121503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.299 [2024-11-15 12:49:19.121516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:17.299 [2024-11-15 12:49:19.121525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:17.299 [2024-11-15 12:49:19.121535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108400 len:8 PRP1 0x0 PRP2 0x0 00:14:17.299 [2024-11-15 12:49:19.121547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.299 [2024-11-15 12:49:19.121611] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:14:17.299 [2024-11-15 12:49:19.121668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.299 [2024-11-15 12:49:19.121732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.299 [2024-11-15 12:49:19.121748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.299 [2024-11-15 12:49:19.121763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.299 [2024-11-15 12:49:19.121778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.299 [2024-11-15 12:49:19.121791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.299 [2024-11-15 12:49:19.121806] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:14:17.299 [2024-11-15 12:49:19.121819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:17.299 [2024-11-15 12:49:19.121834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:14:17.299 [2024-11-15 12:49:19.121868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe22710 (9): Bad file descriptor
00:14:17.299 [2024-11-15 12:49:19.125752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:14:17.299 [2024-11-15 12:49:19.147198] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:14:17.299 9961.20 IOPS, 38.91 MiB/s [2024-11-15T12:49:25.969Z] 9989.09 IOPS, 39.02 MiB/s [2024-11-15T12:49:25.969Z] 10031.33 IOPS, 39.18 MiB/s [2024-11-15T12:49:25.969Z] 10066.00 IOPS, 39.32 MiB/s [2024-11-15T12:49:25.969Z] 10092.57 IOPS, 39.42 MiB/s [2024-11-15T12:49:25.969Z] 10117.33 IOPS, 39.52 MiB/s
00:14:17.299 Latency(us)
00:14:17.299 [2024-11-15T12:49:25.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:17.299 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:14:17.299 Verification LBA range: start 0x0 length 0x4000
00:14:17.299 NVMe0n1 : 15.01 10117.24 39.52 218.34 0.00 12355.83 584.61 16681.89
00:14:17.299 [2024-11-15T12:49:25.969Z] ===================================================================================================================
00:14:17.299 [2024-11-15T12:49:25.969Z] Total : 10117.24 39.52 218.34 0.00 12355.83 584.61 16681.89
00:14:17.299 Received shutdown signal, test time was about 15.000000 seconds
00:14:17.299
00:14:17.299 Latency(us)
00:14:17.299 [2024-11-15T12:49:25.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:17.299 [2024-11-15T12:49:25.969Z] ===================================================================================================================
00:14:17.299 [2024-11-15T12:49:25.969Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:14:17.299 12:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:14:17.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:14:17.299 12:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:14:17.299 12:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:14:17.299 12:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=74873 00:14:17.299 12:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 74873 /var/tmp/bdevperf.sock 00:14:17.299 12:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 74873 ']' 00:14:17.299 12:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:17.299 12:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:14:17.299 12:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:17.299 12:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:17.299 12:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:17.299 12:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:17.299 12:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:17.299 12:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:14:17.299 12:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:14:17.299 [2024-11-15 12:49:25.576396] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:14:17.299 12:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:14:17.299 [2024-11-15 12:49:25.852650] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:14:17.300 12:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:14:17.558 NVMe0n1 00:14:17.558 12:49:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:14:18.124 00:14:18.124 12:49:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:14:18.383 00:14:18.383 12:49:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:14:18.383 12:49:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:18.641 12:49:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:14:18.899 12:49:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:14:22.183 12:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 12:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 12:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 12:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=74942 12:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 74942
00:14:23.119 {
00:14:23.119 "results": [
00:14:23.119 {
00:14:23.119 "job": "NVMe0n1",
00:14:23.119 "core_mask": "0x1",
00:14:23.119 "workload": "verify",
00:14:23.119 "status": "finished",
00:14:23.119 "verify_range": {
00:14:23.119 "start": 0,
00:14:23.119 "length": 16384
00:14:23.119 },
00:14:23.119 "queue_depth": 128,
00:14:23.119 "io_size": 4096,
00:14:23.119 "runtime": 1.004326,
00:14:23.119 "iops": 7810.213018482047,
00:14:23.119 "mibps": 30.508644603445497,
00:14:23.119 "io_failed": 0,
00:14:23.119 "io_timeout": 0,
00:14:23.119 "avg_latency_us": 16316.538285661307,
00:14:23.119 "min_latency_us": 1035.1709090909092,
00:14:23.119 "max_latency_us": 14537.076363636364
00:14:23.119 }
00:14:23.119 ],
00:14:23.119 "core_count": 1
00:14:23.119 }
00:14:23.119 12:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:14:23.378 [2024-11-15 12:49:25.047259] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization...
00:14:23.378 [2024-11-15 12:49:25.047352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74873 ] 00:14:23.378 [2024-11-15 12:49:25.191503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.378 [2024-11-15 12:49:25.223109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.378 [2024-11-15 12:49:25.251962] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:23.378 [2024-11-15 12:49:27.359844] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:14:23.378 [2024-11-15 12:49:27.359962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:23.378 [2024-11-15 12:49:27.360002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:23.378 [2024-11-15 12:49:27.360019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:23.378 [2024-11-15 12:49:27.360032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:23.378 [2024-11-15 12:49:27.360044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:23.378 [2024-11-15 12:49:27.360057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:23.378 [2024-11-15 12:49:27.360070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:23.378 [2024-11-15 12:49:27.360082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:23.378 [2024-11-15 12:49:27.360094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:14:23.378 [2024-11-15 12:49:27.360140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:14:23.378 [2024-11-15 12:49:27.360169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd36710 (9): Bad file descriptor 00:14:23.378 [2024-11-15 12:49:27.362787] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:14:23.378 Running I/O for 1 seconds... 
00:14:23.378 7716.00 IOPS, 30.14 MiB/s 00:14:23.378 Latency(us) 00:14:23.378 [2024-11-15T12:49:32.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.378 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:23.378 Verification LBA range: start 0x0 length 0x4000 00:14:23.378 NVMe0n1 : 1.00 7810.21 30.51 0.00 0.00 16316.54 1035.17 14537.08 00:14:23.378 [2024-11-15T12:49:32.048Z] =================================================================================================================== 00:14:23.378 [2024-11-15T12:49:32.048Z] Total : 7810.21 30.51 0.00 0.00 16316.54 1035.17 14537.08 00:14:23.378 12:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:23.378 12:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:14:23.636 12:49:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:23.895 12:49:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:23.895 12:49:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:14:24.153 12:49:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:24.411 12:49:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:14:27.694 12:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:27.694 12:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:14:27.694 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 74873 00:14:27.694 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 74873 ']' 00:14:27.694 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 74873 00:14:27.694 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:14:27.694 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:27.694 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74873 00:14:27.694 killing process with pid 74873 00:14:27.694 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:27.694 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:27.694 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74873' 00:14:27.694 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 74873 00:14:27.694 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 74873 00:14:27.694 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:14:27.694 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:27.952 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:14:27.952 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:27.952 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:14:27.952 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:27.952 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:14:28.211 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:28.211 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:14:28.211 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:28.211 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:28.211 rmmod nvme_tcp 00:14:28.211 rmmod nvme_fabrics 00:14:28.211 rmmod nvme_keyring 00:14:28.211 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:28.211 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:14:28.211 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:14:28.211 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 74620 ']' 00:14:28.211 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 74620 00:14:28.211 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 74620 ']' 00:14:28.211 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 74620 00:14:28.211 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:14:28.211 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:28.211 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74620 00:14:28.211 killing process with pid 74620 00:14:28.211 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:28.211 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:28.211 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74620' 00:14:28.211 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 74620 00:14:28.211 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 74620 00:14:28.211 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:28.211 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:28.211 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:28.211 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:14:28.211 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:14:28.211 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:28.211 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:14:28.211 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:28.211 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:28.211 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:28.211 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:28.471 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:28.471 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:28.471 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:28.471 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:28.471 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:28.471 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:28.471 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:28.471 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:28.471 12:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:28.471 12:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:28.471 12:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:28.471 12:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:28.471 12:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.471 12:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:28.471 12:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.471 12:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:14:28.471 00:14:28.471 real 0m31.987s 00:14:28.471 user 2m4.072s 00:14:28.471 sys 0m5.319s 00:14:28.471 12:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:28.471 ************************************ 00:14:28.471 12:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:28.471 END TEST nvmf_failover 00:14:28.471 ************************************ 00:14:28.471 12:49:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:14:28.471 12:49:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:28.471 12:49:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:28.471 12:49:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:28.471 ************************************ 00:14:28.471 START TEST nvmf_host_discovery 00:14:28.471 ************************************ 00:14:28.471 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:14:28.731 * Looking for test storage... 
00:14:28.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:28.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.731 --rc genhtml_branch_coverage=1 00:14:28.731 --rc genhtml_function_coverage=1 00:14:28.731 --rc genhtml_legend=1 00:14:28.731 --rc geninfo_all_blocks=1 00:14:28.731 --rc geninfo_unexecuted_blocks=1 00:14:28.731 00:14:28.731 ' 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:28.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.731 --rc genhtml_branch_coverage=1 00:14:28.731 --rc genhtml_function_coverage=1 00:14:28.731 --rc genhtml_legend=1 00:14:28.731 --rc geninfo_all_blocks=1 00:14:28.731 --rc geninfo_unexecuted_blocks=1 00:14:28.731 00:14:28.731 ' 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:28.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.731 --rc genhtml_branch_coverage=1 00:14:28.731 --rc genhtml_function_coverage=1 00:14:28.731 --rc genhtml_legend=1 00:14:28.731 --rc geninfo_all_blocks=1 00:14:28.731 --rc geninfo_unexecuted_blocks=1 00:14:28.731 00:14:28.731 ' 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:28.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.731 --rc genhtml_branch_coverage=1 00:14:28.731 --rc genhtml_function_coverage=1 00:14:28.731 --rc genhtml_legend=1 00:14:28.731 --rc geninfo_all_blocks=1 00:14:28.731 --rc geninfo_unexecuted_blocks=1 00:14:28.731 00:14:28.731 ' 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:28.731 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:28.732 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
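For orientation, here is a condensed sketch of the veth/netns topology that nvmf_veth_init builds for this run, reconstructed from the commands traced below; the interface names and the 10.0.0.0/24 addressing are taken from this log, and the commands assume root privileges on a Linux host:

  # target side lives in its own network namespace
  ip netns add nvmf_tgt_ns_spdk
  # two initiator veth pairs and two target veth pairs
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  # move the target ends into the namespace and assign addresses
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # bring everything up, inside and outside the namespace
  ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
  ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
  ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the peer ends together so initiator and target can talk
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  # allow NVMe/TCP (port 4420) in and forwarding across the bridge
  # (the real helper also tags each rule with an SPDK_NVMF comment)
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four ping checks that follow in the trace simply verify this topology: the initiator side reaches 10.0.0.3/10.0.0.4 and the namespace reaches 10.0.0.1/10.0.0.2.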
00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:28.732 Cannot find device "nvmf_init_br" 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:28.732 Cannot find device "nvmf_init_br2" 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:28.732 Cannot find device "nvmf_tgt_br" 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:28.732 Cannot find device "nvmf_tgt_br2" 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:14:28.732 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:28.991 Cannot find device "nvmf_init_br" 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:28.991 Cannot find device "nvmf_init_br2" 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:28.991 Cannot find device "nvmf_tgt_br" 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:28.991 Cannot find device "nvmf_tgt_br2" 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:28.991 Cannot find device "nvmf_br" 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:28.991 Cannot find device "nvmf_init_if" 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:28.991 Cannot find device "nvmf_init_if2" 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:28.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:28.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:28.991 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:29.338 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:29.338 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:14:29.338 00:14:29.338 --- 10.0.0.3 ping statistics --- 00:14:29.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.338 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:29.338 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:29.338 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 00:14:29.338 00:14:29.338 --- 10.0.0.4 ping statistics --- 00:14:29.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.338 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:29.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:29.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:14:29.338 00:14:29.338 --- 10.0.0.1 ping statistics --- 00:14:29.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.338 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:29.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:29.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:14:29.338 00:14:29.338 --- 10.0.0.2 ping statistics --- 00:14:29.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.338 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=75267 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 75267 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75267 ']' 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:29.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:29.338 12:49:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.338 [2024-11-15 12:49:37.786951] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
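The target application is then launched inside that namespace. A minimal equivalent of what nvmfappstart does here, assuming the repository path from this environment and that waitforlisten polls the default /var/tmp/spdk.sock RPC socket until the app answers:

  # start the SPDK NVMe-oF target inside the test namespace:
  # shm id 0 (-i 0), all tracepoint groups (-e 0xFFFF), core mask 0x2
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # block until the target is up and listening on /var/tmp/spdk.sock
  waitforlisten "$nvmfpid"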
00:14:29.338 [2024-11-15 12:49:37.787034] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.338 [2024-11-15 12:49:37.934473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.338 [2024-11-15 12:49:37.964821] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.338 [2024-11-15 12:49:37.964871] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:29.338 [2024-11-15 12:49:37.964898] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:29.338 [2024-11-15 12:49:37.964905] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:29.338 [2024-11-15 12:49:37.964912] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:29.338 [2024-11-15 12:49:37.965205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.338 [2024-11-15 12:49:37.992519] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.619 [2024-11-15 12:49:38.098482] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.619 [2024-11-15 12:49:38.106562] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.619 12:49:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.619 null0 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.619 null1 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75293 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75293 /tmp/host.sock 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75293 ']' 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:29.619 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:29.619 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.619 [2024-11-15 12:49:38.194960] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
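A second SPDK application is started in the default namespace to play the NVMe-oF host role; its bdev_nvme module performs the discovery and the connects. Condensed from the trace above, with the same path assumptions:

  # host-side SPDK app: core 0 only, RPC served on /tmp/host.sock so it
  # does not clash with the target instance's default socket
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  hostpid=$!
  waitforlisten "$hostpid" /tmp/host.sock
  rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme   # verbose bdev_nvme logging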
00:14:29.619 [2024-11-15 12:49:38.195070] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75293 ] 00:14:29.893 [2024-11-15 12:49:38.348480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.893 [2024-11-15 12:49:38.387205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.893 [2024-11-15 12:49:38.420475] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:29.893 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:29.893 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:14:29.893 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:29.893 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:14:29.893 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.893 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.893 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.893 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:14:29.893 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.893 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.893 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.893 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:14:29.893 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:14:29.893 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:29.893 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:29.893 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.893 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.893 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:29.893 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:29.893 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.893 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:14:29.893 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:14:29.893 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:29.893 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:29.893 12:49:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.893 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:29.893 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.893 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:29.893 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.153 12:49:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:30.153 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.412 [2024-11-15 12:49:38.842769] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.412 12:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.412 12:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:14:30.412 12:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:14:30.412 12:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:14:30.412 12:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:30.412 12:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:14:30.412 12:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.412 12:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.412 12:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.412 12:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:30.412 12:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:30.412 12:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:30.412 12:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:30.412 12:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:30.412 12:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:14:30.412 12:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:30.412 12:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:30.412 12:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.412 12:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:30.412 12:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.412 12:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:30.412 12:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.413 12:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:14:30.413 12:49:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:14:30.979 [2024-11-15 12:49:39.491207] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:14:30.979 [2024-11-15 12:49:39.491252] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:14:30.979 [2024-11-15 12:49:39.491273] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:14:30.979 
[2024-11-15 12:49:39.497253] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:14:30.979 [2024-11-15 12:49:39.551587] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:14:30.979 [2024-11-15 12:49:39.552499] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x24a0e50:1 started. 00:14:30.979 [2024-11-15 12:49:39.554165] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:14:30.979 [2024-11-15 12:49:39.554204] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:14:30.979 [2024-11-15 12:49:39.559830] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x24a0e50 was disconnected and freed. delete nvme_qpair. 00:14:31.546 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:31.546 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:31.546 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:14:31.546 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:31.546 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:31.546 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.546 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:31.546 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:31.546 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:31.546 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.546 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.546 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:31.546 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:14:31.546 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:14:31.546 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:31.546 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:31.546 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:14:31.546 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:14:31.546 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:31.546 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:31.547 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.547 12:49:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:31.547 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:31.547 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:31.547 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.547 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:14:31.547 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:31.547 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:14:31.547 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:14:31.547 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:31.547 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:31.547 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:14:31.547 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:14:31.547 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:31.547 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.547 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:31.547 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:31.547 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:14:31.547 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:14:31.547 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:31.806 [2024-11-15 12:49:40.313253] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x24aef80:1 started. 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:31.806 [2024-11-15 12:49:40.320421] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x24aef80 was disconnected and freed. delete nvme_qpair. 
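Pulling the RPC traffic out of the trace, the sequence driven so far looks roughly like the following, grouped by application rather than in exact trace order (rpc_cmd is the test wrapper around scripts/rpc.py; -s selects the host-side socket):

  # --- target instance (default /var/tmp/spdk.sock, inside the namespace) ---
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
          -t tcp -a 10.0.0.3 -s 8009                    # discovery service
  rpc_cmd bdev_null_create null0 1000 512
  rpc_cmd bdev_null_create null1 1000 512
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
          -t tcp -a 10.0.0.3 -s 4420                    # data port
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1   # second namespace

  # --- host instance (/tmp/host.sock) ---
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
          -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  # the waitforcondition checks above poll these until nvme0, nvme0n1 and
  # nvme0n2 appear and the notification count matches
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
  rpc_cmd -s /tmp/host.sock bdev_get_bdevs

Note that bdev_nvme_start_discovery is issued before the subsystem exists; the discovery service keeps the log page up to date, so the host attaches to cnode0 automatically once the 4420 listener and namespaces are added, which is what the discovery_log_page_cb / ctrlr-created messages above show.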
00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:31.806 [2024-11-15 12:49:40.427919] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:14:31.806 [2024-11-15 12:49:40.428292] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:14:31.806 [2024-11-15 12:49:40.428316] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:31.806 [2024-11-15 12:49:40.434315] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:31.806 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:32.066 [2024-11-15 12:49:40.494792] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:14:32.066 [2024-11-15 12:49:40.494839] 
bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:14:32.066 [2024-11-15 12:49:40.494851] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:14:32.066 [2024-11-15 12:49:40.494857] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:32.066 [2024-11-15 12:49:40.636548] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:14:32.066 [2024-11-15 12:49:40.636575] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:32.066 [2024-11-15 12:49:40.641786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:32.066 [2024-11-15 12:49:40.641821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:32.066 [2024-11-15 12:49:40.641835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:32.066 [2024-11-15 12:49:40.641859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:32.066 [2024-11-15 12:49:40.641872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:32.066 [2024-11-15 12:49:40.641895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:32.066 [2024-11-15 12:49:40.641920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:32.066 
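This stretch of the test reshapes the target's listener set directly over RPC: discovery.sh@118 above adds a listener on port 4421, and discovery.sh@127 removes the original one on 4420, which tears down the 4420 qpair (the aborted ASYNC EVENT REQUESTs dumped around here) and, just below, makes the discovery poller drop the 4420 path while keeping 4421. Outside the harness the same two steps are plain rpc.py calls against the target (hedged sketch; rpc_cmd above simply forwards to the target's default RPC socket):

    # Add a second TCP listener, then retire the original one; the host-side
    # discovery service picks up the change via AER + a fresh discovery log page.
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420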
[2024-11-15 12:49:40.641945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:32.066 [2024-11-15 12:49:40.641970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247d230 is same with the state(6) to be set 00:14:32.066 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:32.066 [2024-11-15 12:49:40.642571] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:14:32.066 [2024-11-15 12:49:40.642623] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:14:32.067 [2024-11-15 12:49:40.642689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x247d230 (9): Bad file descriptor 00:14:32.067 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:32.067 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:32.067 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:32.067 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:14:32.067 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:32.067 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:32.067 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:32.067 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.067 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:32.067 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:32.067 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.067 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.067 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:32.067 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:32.067 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:32.067 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:32.067 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:32.067 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:14:32.067 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:14:32.067 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:32.067 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:32.067 
12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.067 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:32.067 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:32.067 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:32.067 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:32.326 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:32.327 
12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.327 12:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:32.586 12:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.586 12:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:14:32.586 12:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:14:32.586 12:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:14:32.586 12:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:32.586 12:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:32.586 12:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.586 12:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:33.521 [2024-11-15 12:49:42.052124] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:14:33.521 [2024-11-15 12:49:42.052144] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:14:33.522 [2024-11-15 12:49:42.052161] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:14:33.522 [2024-11-15 12:49:42.058152] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:14:33.522 [2024-11-15 12:49:42.116567] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:14:33.522 [2024-11-15 12:49:42.117344] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x2475eb0:1 started. 00:14:33.522 [2024-11-15 12:49:42.119065] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:14:33.522 [2024-11-15 12:49:42.119098] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:14:33.522 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.522 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:33.522 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:14:33.522 [2024-11-15 12:49:42.120888] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x2475eb0 was disconnected and freed. delete nvme_qpair. 
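discovery.sh@143 wraps the next RPC in NOT because failure is the expected outcome: a discovery service named nvme is already attached to 10.0.0.3:8009, so asking bdev_nvme_start_discovery to register another one under the same name is refused with JSON-RPC error -17 ("File exists"), as the request/response dump below shows. A standalone equivalent of the check (hedged sketch; NOT itself essentially inverts the wrapped command's exit status):

    # Expect failure: a discovery context named "nvme" already exists for
    # 10.0.0.3:8009 on the host app at /tmp/host.sock.
    if rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
            -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w; then
        echo "second bdev_nvme_start_discovery unexpectedly succeeded" >&2
        exit 1
    fi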
00:14:33.522 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:33.522 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:33.522 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:33.522 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:33.522 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:33.522 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:33.522 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.522 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:33.522 request: 00:14:33.522 { 00:14:33.522 "name": "nvme", 00:14:33.522 "trtype": "tcp", 00:14:33.522 "traddr": "10.0.0.3", 00:14:33.522 "adrfam": "ipv4", 00:14:33.522 "trsvcid": "8009", 00:14:33.522 "hostnqn": "nqn.2021-12.io.spdk:test", 00:14:33.522 "wait_for_attach": true, 00:14:33.522 "method": "bdev_nvme_start_discovery", 00:14:33.522 "req_id": 1 00:14:33.522 } 00:14:33.522 Got JSON-RPC error response 00:14:33.522 response: 00:14:33.522 { 00:14:33.522 "code": -17, 00:14:33.522 "message": "File exists" 00:14:33.522 } 00:14:33.522 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:33.522 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:14:33.522 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:33.522 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:33.522 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:33.522 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:14:33.522 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:14:33.522 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.522 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:14:33.522 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:33.522 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:14:33.522 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:14:33.522 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:33.781 12:49:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:33.781 request: 00:14:33.781 { 00:14:33.781 "name": "nvme_second", 00:14:33.781 "trtype": "tcp", 00:14:33.781 "traddr": "10.0.0.3", 00:14:33.781 "adrfam": "ipv4", 00:14:33.781 "trsvcid": "8009", 00:14:33.781 "hostnqn": "nqn.2021-12.io.spdk:test", 00:14:33.781 "wait_for_attach": true, 00:14:33.781 "method": "bdev_nvme_start_discovery", 00:14:33.781 "req_id": 1 00:14:33.781 } 00:14:33.781 Got JSON-RPC error response 00:14:33.781 response: 00:14:33.781 { 00:14:33.781 "code": -17, 00:14:33.781 "message": "File exists" 00:14:33.781 } 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:33.781 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:14:33.782 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.782 12:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:35.159 [2024-11-15 12:49:43.387426] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:14:35.159 [2024-11-15 12:49:43.387488] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24a1e40 with addr=10.0.0.3, port=8010 00:14:35.159 [2024-11-15 12:49:43.387506] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:14:35.159 [2024-11-15 12:49:43.387514] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:14:35.159 [2024-11-15 12:49:43.387522] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:14:35.726 [2024-11-15 12:49:44.387374] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:14:35.726 [2024-11-15 12:49:44.387427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24a1e40 with addr=10.0.0.3, port=8010 00:14:35.726 [2024-11-15 12:49:44.387442] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:14:35.726 [2024-11-15 12:49:44.387450] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:14:35.726 [2024-11-15 12:49:44.387457] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:14:37.102 [2024-11-15 12:49:45.387305] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:14:37.102 request: 00:14:37.102 { 00:14:37.102 "name": "nvme_second", 00:14:37.102 "trtype": "tcp", 00:14:37.102 "traddr": "10.0.0.3", 00:14:37.102 "adrfam": "ipv4", 00:14:37.102 "trsvcid": "8010", 00:14:37.102 "hostnqn": "nqn.2021-12.io.spdk:test", 00:14:37.102 "wait_for_attach": false, 00:14:37.102 "attach_timeout_ms": 3000, 00:14:37.102 "method": "bdev_nvme_start_discovery", 00:14:37.102 "req_id": 1 00:14:37.102 } 00:14:37.102 Got JSON-RPC error response 00:14:37.102 response: 00:14:37.102 { 00:14:37.102 "code": -110, 00:14:37.102 "message": "Connection timed out" 00:14:37.102 } 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - 
SIGINT SIGTERM EXIT 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75293 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:37.102 rmmod nvme_tcp 00:14:37.102 rmmod nvme_fabrics 00:14:37.102 rmmod nvme_keyring 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 75267 ']' 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 75267 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 75267 ']' 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 75267 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75267 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:37.102 killing process with pid 75267 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75267' 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 75267 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 75267 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
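At this point nvmftestfini has unloaded the kernel NVMe-oF modules and killed the target process (pid 75267); the veth/netns topology is torn down just below. The iptr step traced above amounts to the pipeline in this hedged reconstruction, pieced together from the three commands in the trace:

    # Reload the saved iptables ruleset with any SPDK_NVMF-tagged rules dropped.
    iptables-save | grep -v SPDK_NVMF | iptables-restore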
00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:37.102 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:37.360 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:37.360 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:37.360 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:37.360 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:37.360 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:37.360 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:37.360 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:37.360 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:37.361 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:37.361 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:37.361 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:37.361 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.361 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:37.361 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.361 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:14:37.361 00:14:37.361 real 0m8.844s 00:14:37.361 user 0m16.950s 00:14:37.361 sys 0m1.800s 00:14:37.361 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:37.361 ************************************ 00:14:37.361 END TEST nvmf_host_discovery 00:14:37.361 ************************************ 00:14:37.361 12:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.361 12:49:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:14:37.361 12:49:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:37.361 12:49:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:37.361 12:49:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:37.361 ************************************ 00:14:37.361 START TEST nvmf_host_multipath_status 00:14:37.361 ************************************ 00:14:37.361 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 
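With nvmf_host_discovery done (real 0m8.844s above), the harness immediately launches the next host suite through run_test. Rerunning just that suite by hand comes down to invoking the same script the trace shows (hedged sketch; paths are the ones used on this CI VM):

    cd /home/vagrant/spdk_repo/spdk
    # run_test essentially banners, times, and runs the script it is handed;
    # calling the script directly reproduces the same suite.
    test/nvmf/host/multipath_status.sh --transport=tcp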
00:14:37.620 * Looking for test storage... 00:14:37.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:37.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.621 --rc genhtml_branch_coverage=1 00:14:37.621 --rc genhtml_function_coverage=1 00:14:37.621 --rc genhtml_legend=1 00:14:37.621 --rc geninfo_all_blocks=1 00:14:37.621 --rc geninfo_unexecuted_blocks=1 00:14:37.621 00:14:37.621 ' 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:37.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.621 --rc genhtml_branch_coverage=1 00:14:37.621 --rc genhtml_function_coverage=1 00:14:37.621 --rc genhtml_legend=1 00:14:37.621 --rc geninfo_all_blocks=1 00:14:37.621 --rc geninfo_unexecuted_blocks=1 00:14:37.621 00:14:37.621 ' 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:37.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.621 --rc genhtml_branch_coverage=1 00:14:37.621 --rc genhtml_function_coverage=1 00:14:37.621 --rc genhtml_legend=1 00:14:37.621 --rc geninfo_all_blocks=1 00:14:37.621 --rc geninfo_unexecuted_blocks=1 00:14:37.621 00:14:37.621 ' 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:37.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.621 --rc genhtml_branch_coverage=1 00:14:37.621 --rc genhtml_function_coverage=1 00:14:37.621 --rc genhtml_legend=1 00:14:37.621 --rc geninfo_all_blocks=1 00:14:37.621 --rc geninfo_unexecuted_blocks=1 00:14:37.621 00:14:37.621 ' 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:37.621 12:49:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:37.621 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:37.621 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:37.622 Cannot find device "nvmf_init_br" 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:37.622 Cannot find device "nvmf_init_br2" 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:37.622 Cannot find device "nvmf_tgt_br" 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:14:37.622 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:37.881 Cannot find device "nvmf_tgt_br2" 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:37.881 Cannot find device "nvmf_init_br" 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:37.881 Cannot find device "nvmf_init_br2" 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:37.881 Cannot find device "nvmf_tgt_br" 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:37.881 Cannot find device "nvmf_tgt_br2" 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:37.881 Cannot find device "nvmf_br" 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:14:37.881 Cannot find device "nvmf_init_if" 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:37.881 Cannot find device "nvmf_init_if2" 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:37.881 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:37.881 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:37.881 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:38.141 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:38.141 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:14:38.141 00:14:38.141 --- 10.0.0.3 ping statistics --- 00:14:38.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.141 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:38.141 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:38.141 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.104 ms 00:14:38.141 00:14:38.141 --- 10.0.0.4 ping statistics --- 00:14:38.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.141 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:38.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:38.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:14:38.141 00:14:38.141 --- 10.0.0.1 ping statistics --- 00:14:38.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.141 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:38.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:38.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:14:38.141 00:14:38.141 --- 10.0.0.2 ping statistics --- 00:14:38.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.141 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:14:38.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=75782 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 75782 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 75782 ']' 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
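The nvmf_veth_init block above builds the test network the rest of the run depends on: two initiator-side veth interfaces (10.0.0.1 and 10.0.0.2) stay in the root namespace, two target-side interfaces (10.0.0.3 and 10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, every peer end is slaved to the nvmf_br bridge, iptables ACCEPT rules are added for TCP port 4420, and one ping per address confirms connectivity in both directions. A condensed sketch of that sequence, reconstructed from the trace (interface, namespace and bridge names exactly as logged):

  ip netns add nvmf_tgt_ns_spdk

  # veth pairs: the *_if ends carry traffic, the *_br ends get bridged
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # initiators 10.0.0.1/.2 in the root namespace, targets 10.0.0.3/.4 in the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # bring everything up and join the peer ends to one bridge
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done

  # open the NVMe/TCP port and allow bridge forwarding (the ipts wrapper in the
  # trace additionally tags each rule with an SPDK_NVMF comment)
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # connectivity check: one ping per address, in both directions
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2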
00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:38.141 12:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:14:38.141 [2024-11-15 12:49:46.697567] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:14:38.141 [2024-11-15 12:49:46.697710] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.400 [2024-11-15 12:49:46.850802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:38.400 [2024-11-15 12:49:46.887573] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.400 [2024-11-15 12:49:46.887649] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.400 [2024-11-15 12:49:46.887664] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.400 [2024-11-15 12:49:46.887674] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.400 [2024-11-15 12:49:46.887683] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:38.400 [2024-11-15 12:49:46.888547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.400 [2024-11-15 12:49:46.888562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.400 [2024-11-15 12:49:46.921486] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:39.337 12:49:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:39.337 12:49:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:14:39.337 12:49:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:39.337 12:49:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:39.337 12:49:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:14:39.337 12:49:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.337 12:49:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=75782 00:14:39.337 12:49:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:39.337 [2024-11-15 12:49:47.948057] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.337 12:49:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:39.595 Malloc0 00:14:39.595 12:49:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:14:40.162 12:49:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:40.162 12:49:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:40.421 [2024-11-15 12:49:48.958990] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:40.421 12:49:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:14:40.680 [2024-11-15 12:49:49.227157] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:14:40.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:40.680 12:49:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=75836 00:14:40.680 12:49:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:40.680 12:49:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:14:40.680 12:49:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 75836 /var/tmp/bdevperf.sock 00:14:40.680 12:49:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 75836 ']' 00:14:40.680 12:49:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:40.680 12:49:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:40.680 12:49:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
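At this point the target application has been started inside the namespace and provisioned over JSON-RPC, and bdevperf has just been launched as the initiator-side workload generator with its own RPC socket. The sequence, condensed from the trace above (binary and script paths as logged):

  # target: shm id 0, all trace groups enabled, core mask 0x3 (reactors on cores 0 and 1)
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport, options from NVMF_TRANSPORT_OPTS
  $rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MB backing bdev, 512 B blocks
  # -a allows any host, -r turns on ANA reporting (needed for the per-listener
  # ANA states exercised below), -m 2 caps the namespace count
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

  # initiator side: bdevperf started idle (-z, wait for RPC) on its own socket, verify workload
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &

Two listeners on the same subsystem are what give the host two distinct paths to the one namespace.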
00:14:40.680 12:49:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:40.680 12:49:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:14:41.616 12:49:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:41.616 12:49:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:14:41.616 12:49:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:14:41.875 12:49:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:14:42.442 Nvme0n1 00:14:42.442 12:49:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:14:42.701 Nvme0n1 00:14:42.701 12:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:14:42.701 12:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:14:44.605 12:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:14:44.605 12:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:14:44.864 12:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:14:45.123 12:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:14:46.059 12:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:14:46.059 12:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:14:46.059 12:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:46.059 12:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:46.317 12:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:46.317 12:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:14:46.317 12:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:46.317 12:49:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:46.883 12:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:46.883 12:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:46.883 12:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:46.883 12:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:46.883 12:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:46.883 12:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:46.883 12:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:46.883 12:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:47.142 12:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:47.142 12:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:14:47.142 12:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:47.142 12:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:47.401 12:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:47.401 12:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:14:47.401 12:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:47.401 12:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:47.660 12:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:47.660 12:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:14:47.660 12:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:47.919 12:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:14:48.177 12:49:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:14:49.554 12:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:14:49.554 12:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:14:49.554 12:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:49.554 12:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:49.554 12:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:49.554 12:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:14:49.554 12:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:49.554 12:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:49.813 12:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:49.813 12:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:49.813 12:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:49.813 12:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:50.071 12:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:50.071 12:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:50.071 12:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:50.071 12:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:50.330 12:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:50.330 12:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:14:50.331 12:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:50.331 12:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:50.590 12:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:50.590 12:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:14:50.590 12:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:50.590 12:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:50.849 12:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:50.849 12:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:14:50.849 12:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:51.108 12:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:14:51.367 12:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:14:52.302 12:50:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:14:52.302 12:50:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:14:52.302 12:50:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:52.302 12:50:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:52.560 12:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:52.560 12:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:14:52.560 12:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:52.560 12:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:52.818 12:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:52.818 12:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:52.818 12:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:52.818 12:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:53.076 12:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:53.076 12:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:14:53.076 12:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:53.076 12:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:53.335 12:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:53.335 12:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:14:53.335 12:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:53.335 12:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:53.901 12:50:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:53.901 12:50:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:14:53.901 12:50:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:53.901 12:50:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:53.901 12:50:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:53.901 12:50:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:14:53.902 12:50:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:54.469 12:50:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:14:54.469 12:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:14:55.847 12:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:14:55.847 12:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:14:55.847 12:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:55.847 12:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:55.847 12:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:55.847 12:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:14:55.847 12:50:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:55.847 12:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:56.106 12:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:56.106 12:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:56.106 12:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:56.106 12:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:56.364 12:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:56.364 12:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:56.364 12:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:56.364 12:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:56.623 12:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:56.623 12:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:14:56.623 12:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:56.623 12:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:56.882 12:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:56.882 12:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:14:56.882 12:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:56.882 12:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:57.141 12:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:57.141 12:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:14:57.141 12:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:14:57.399 12:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:14:57.657 12:50:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:14:58.594 12:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:14:58.594 12:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:14:58.594 12:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:58.594 12:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:59.161 12:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:59.161 12:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:14:59.161 12:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:59.161 12:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:59.161 12:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:59.161 12:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:59.161 12:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:59.161 12:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:59.729 12:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:59.729 12:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:59.729 12:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:59.729 12:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:59.729 12:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:59.729 12:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:14:59.729 12:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:59.729 12:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:14:59.987 12:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:59.987 12:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:14:59.987 12:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:59.987 12:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:00.247 12:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:00.247 12:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:15:00.247 12:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:00.505 12:50:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:15:00.764 12:50:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:15:01.700 12:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:15:01.700 12:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:01.700 12:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:01.700 12:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:02.268 12:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:02.268 12:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:02.268 12:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:02.268 12:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:02.268 12:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:02.268 12:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:02.268 12:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:02.268 12:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
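Every check_status round in this stretch is the same probe repeated six times: bdevperf, which was attached to both listeners with two bdev_nvme_attach_controller calls in -x multipath mode so that ports 4420 and 4421 appear as two I/O paths of the single Nvme0n1 bdev, is asked for its path table and jq extracts one boolean per port. A sketch of that helper, reconstructed from the rpc.py/jq pairs visible above (the real version is part of multipath_status.sh), with bdevperf.py perform_tests keeping the verify workload running in the background the whole time:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # the two paths were registered earlier with (flags exactly as logged):
  #   $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  #   $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 \
  #       -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  #   (and the same command again with -s 4421)

  # port_status <port> <field> <expected>  -- field is current, connected or accessible
  port_status() {
      local port=$1 field=$2 expected=$3 value
      value=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
      [[ $value == "$expected" ]]
  }

  # e.g. the very first round above (both listeners optimized, default multipath policy):
  port_status 4420 current true    && port_status 4421 current false
  port_status 4420 connected true  && port_status 4421 connected true
  port_status 4420 accessible true && port_status 4421 accessible true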
00:15:02.527 12:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:02.527 12:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:02.527 12:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:02.527 12:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:02.786 12:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:02.786 12:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:15:02.786 12:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:02.786 12:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:03.045 12:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:03.045 12:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:03.314 12:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:03.314 12:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:03.314 12:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:03.314 12:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:15:03.625 12:50:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:15:03.625 12:50:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:03.909 12:50:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:15:04.187 12:50:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:15:05.124 12:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:15:05.124 12:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:05.124 12:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
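Before each check_status round, the target-side ANA state of both listeners is flipped through nvmf_subsystem_listener_set_ana_state (the @59/@60 lines), followed by a one-second settle. A hedged sketch of that step, with the NQN, address and ports copied from the trace and the wrapper itself an assumption reconstructed from it:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# set_ANA_state <state for 4420> <state for 4421>: sketch of the target-side flip.
set_ANA_state() {
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4420 -n "$1"
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4421 -n "$2"
}

set_ANA_state optimized optimized
sleep 1    # let the host re-read the ANA log page before re-checking path status
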
00:15:05.124 12:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:05.383 12:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:05.383 12:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:05.383 12:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:05.383 12:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:05.641 12:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:05.641 12:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:05.641 12:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:05.641 12:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:05.900 12:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:05.900 12:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:05.900 12:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:05.900 12:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:06.468 12:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:06.468 12:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:06.468 12:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:06.468 12:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:06.468 12:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:06.468 12:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:06.468 12:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:06.468 12:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:06.727 12:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:06.727 
12:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:15:06.727 12:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:06.985 12:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:15:07.244 12:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:15:08.622 12:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:15:08.622 12:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:08.622 12:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:08.622 12:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:08.622 12:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:08.622 12:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:08.622 12:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:08.622 12:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:08.881 12:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:08.881 12:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:08.881 12:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:08.881 12:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:09.141 12:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:09.141 12:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:09.141 12:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:09.141 12:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:09.399 12:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:09.399 12:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:09.399 12:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:09.399 12:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:09.658 12:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:09.658 12:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:09.658 12:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:09.658 12:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:09.917 12:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:09.917 12:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:15:09.917 12:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:10.177 12:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:15:10.435 12:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:15:11.814 12:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:15:11.814 12:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:11.814 12:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:11.814 12:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:11.814 12:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:11.814 12:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:11.814 12:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:11.814 12:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:12.073 12:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:12.073 12:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:15:12.073 12:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:12.073 12:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:12.332 12:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:12.333 12:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:12.333 12:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:12.333 12:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:12.591 12:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:12.591 12:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:12.591 12:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:12.591 12:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:12.850 12:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:12.850 12:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:12.850 12:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:12.850 12:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:13.108 12:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:13.108 12:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:15:13.108 12:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:13.367 12:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:15:13.626 12:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:15:14.562 12:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:15:14.562 12:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:14.562 12:50:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:14.562 12:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:14.821 12:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:14.821 12:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:14.821 12:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:14.821 12:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:15.081 12:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:15.081 12:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:15.081 12:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:15.081 12:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:15.340 12:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:15.340 12:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:15.340 12:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:15.340 12:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:15.599 12:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:15.599 12:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:15.599 12:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:15.599 12:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:15.857 12:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:15.857 12:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:15.857 12:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:15.857 12:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:15:16.117 12:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:16.117 12:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 75836 00:15:16.117 12:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 75836 ']' 00:15:16.117 12:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 75836 00:15:16.117 12:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:15:16.117 12:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:16.117 12:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75836 00:15:16.117 killing process with pid 75836 00:15:16.117 12:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:16.117 12:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:16.117 12:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75836' 00:15:16.117 12:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 75836 00:15:16.117 12:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 75836 00:15:16.117 { 00:15:16.117 "results": [ 00:15:16.117 { 00:15:16.117 "job": "Nvme0n1", 00:15:16.117 "core_mask": "0x4", 00:15:16.117 "workload": "verify", 00:15:16.117 "status": "terminated", 00:15:16.117 "verify_range": { 00:15:16.117 "start": 0, 00:15:16.117 "length": 16384 00:15:16.117 }, 00:15:16.117 "queue_depth": 128, 00:15:16.117 "io_size": 4096, 00:15:16.117 "runtime": 33.39029, 00:15:16.117 "iops": 9475.928481004508, 00:15:16.117 "mibps": 37.01534562892386, 00:15:16.117 "io_failed": 0, 00:15:16.117 "io_timeout": 0, 00:15:16.117 "avg_latency_us": 13480.219142247368, 00:15:16.117 "min_latency_us": 655.36, 00:15:16.117 "max_latency_us": 4026531.84 00:15:16.117 } 00:15:16.117 ], 00:15:16.117 "core_count": 1 00:15:16.117 } 00:15:16.379 12:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 75836 00:15:16.379 12:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:16.379 [2024-11-15 12:49:49.301297] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:15:16.379 [2024-11-15 12:49:49.301397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75836 ] 00:15:16.379 [2024-11-15 12:49:49.454355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.379 [2024-11-15 12:49:49.493417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:16.379 [2024-11-15 12:49:49.527239] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:16.379 Running I/O for 90 seconds... 
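The bdevperf result block printed a few lines above is internally consistent: assuming "mibps" is derived as iops * io_size / 2^20, the reported values check out.

# Consistency check on the result JSON above (values copied from it; the formula
# mibps = iops * io_size / 2^20 is an assumption about how bdevperf reports it).
echo '9475.928481004508 * 4096 / 1048576' | bc -l    # ~37.0153, matches "mibps"
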
00:15:16.379 7957.00 IOPS, 31.08 MiB/s [2024-11-15T12:50:25.049Z] 7946.50 IOPS, 31.04 MiB/s [2024-11-15T12:50:25.049Z] 7943.00 IOPS, 31.03 MiB/s [2024-11-15T12:50:25.049Z] 7913.00 IOPS, 30.91 MiB/s [2024-11-15T12:50:25.049Z] 7889.00 IOPS, 30.82 MiB/s [2024-11-15T12:50:25.049Z] 8235.50 IOPS, 32.17 MiB/s [2024-11-15T12:50:25.049Z] 8568.57 IOPS, 33.47 MiB/s [2024-11-15T12:50:25.049Z] 8802.25 IOPS, 34.38 MiB/s [2024-11-15T12:50:25.049Z] 9003.44 IOPS, 35.17 MiB/s [2024-11-15T12:50:25.049Z] 9167.00 IOPS, 35.81 MiB/s [2024-11-15T12:50:25.049Z] 9280.36 IOPS, 36.25 MiB/s [2024-11-15T12:50:25.049Z] 9385.00 IOPS, 36.66 MiB/s [2024-11-15T12:50:25.049Z] 9496.92 IOPS, 37.10 MiB/s [2024-11-15T12:50:25.049Z] 9576.29 IOPS, 37.41 MiB/s [2024-11-15T12:50:25.049Z] [2024-11-15 12:50:05.948430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.379 [2024-11-15 12:50:05.948493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:15:16.379 [2024-11-15 12:50:05.948563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.379 [2024-11-15 12:50:05.948584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:15:16.379 [2024-11-15 12:50:05.948605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.379 [2024-11-15 12:50:05.948634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:15:16.379 [2024-11-15 12:50:05.948656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.379 [2024-11-15 12:50:05.948670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:15:16.379 [2024-11-15 12:50:05.948689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.379 [2024-11-15 12:50:05.948702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:15:16.379 [2024-11-15 12:50:05.948722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.379 [2024-11-15 12:50:05.948735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:15:16.379 [2024-11-15 12:50:05.948753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.379 [2024-11-15 12:50:05.948766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:15:16.379 [2024-11-15 12:50:05.948785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.380 [2024-11-15 12:50:05.948798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.948817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.380 [2024-11-15 12:50:05.948830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.948874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.380 [2024-11-15 12:50:05.948890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.948909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.380 [2024-11-15 12:50:05.948923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.948941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.380 [2024-11-15 12:50:05.948954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.948973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.380 [2024-11-15 12:50:05.948986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.949004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.380 [2024-11-15 12:50:05.949017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.949036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.380 [2024-11-15 12:50:05.949048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.949067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.380 [2024-11-15 12:50:05.949079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.949099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.380 [2024-11-15 12:50:05.949112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.949131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.380 [2024-11-15 12:50:05.949144] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.949163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.380 [2024-11-15 12:50:05.949192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.949212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.380 [2024-11-15 12:50:05.949231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.949250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.380 [2024-11-15 12:50:05.949264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.949293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.380 [2024-11-15 12:50:05.949309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.949328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.380 [2024-11-15 12:50:05.949342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.949362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.380 [2024-11-15 12:50:05.949376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.949413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.380 [2024-11-15 12:50:05.949432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.949453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.380 [2024-11-15 12:50:05.949468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.949487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.380 [2024-11-15 12:50:05.949501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.949521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:15:16.380 [2024-11-15 12:50:05.949535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.949554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.380 [2024-11-15 12:50:05.949568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.949588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.380 [2024-11-15 12:50:05.949602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.949650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.380 [2024-11-15 12:50:05.949668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.949734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.380 [2024-11-15 12:50:05.949750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.949780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.380 [2024-11-15 12:50:05.949796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.949817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.380 [2024-11-15 12:50:05.949843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.949866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.380 [2024-11-15 12:50:05.949881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.949903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.380 [2024-11-15 12:50:05.949917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.949939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.380 [2024-11-15 12:50:05.949954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.949974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:61 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.380 [2024-11-15 12:50:05.949989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:15:16.380 [2024-11-15 12:50:05.950040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.380 [2024-11-15 12:50:05.950069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.950103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.381 [2024-11-15 12:50:05.950115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.950134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.381 [2024-11-15 12:50:05.950147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.950166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.381 [2024-11-15 12:50:05.950179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.950198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.381 [2024-11-15 12:50:05.950211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.950231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.381 [2024-11-15 12:50:05.950245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.950265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.381 [2024-11-15 12:50:05.950278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.950297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.381 [2024-11-15 12:50:05.950317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.950338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.381 [2024-11-15 12:50:05.950352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.950371] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.381 [2024-11-15 12:50:05.950385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.950419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.381 [2024-11-15 12:50:05.950436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.950461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.381 [2024-11-15 12:50:05.950476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.950495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.381 [2024-11-15 12:50:05.950508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.950527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.381 [2024-11-15 12:50:05.950540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.950559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.381 [2024-11-15 12:50:05.950572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.950591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.381 [2024-11-15 12:50:05.950632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.950654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.381 [2024-11-15 12:50:05.950668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.950688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.381 [2024-11-15 12:50:05.950701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.950721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.381 [2024-11-15 12:50:05.950735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000c p:0 m:0 
dnr:0 00:15:16.381 [2024-11-15 12:50:05.950754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.381 [2024-11-15 12:50:05.950768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.950798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.381 [2024-11-15 12:50:05.950813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.950833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.381 [2024-11-15 12:50:05.950847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.950867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.381 [2024-11-15 12:50:05.950880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.950900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.381 [2024-11-15 12:50:05.950914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.950933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.381 [2024-11-15 12:50:05.950947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.950966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.381 [2024-11-15 12:50:05.950980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.951000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.381 [2024-11-15 12:50:05.951014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.951034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.381 [2024-11-15 12:50:05.951047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.951067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.381 [2024-11-15 12:50:05.951081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.951100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.381 [2024-11-15 12:50:05.951129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.951148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.381 [2024-11-15 12:50:05.951168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.951188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.381 [2024-11-15 12:50:05.951201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.951228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.381 [2024-11-15 12:50:05.951243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.951262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.381 [2024-11-15 12:50:05.951275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.951294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.381 [2024-11-15 12:50:05.951307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.951326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.381 [2024-11-15 12:50:05.951340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.951358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.381 [2024-11-15 12:50:05.951372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.951391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.381 [2024-11-15 12:50:05.951404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.951423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.381 [2024-11-15 12:50:05.951436] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.951455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.381 [2024-11-15 12:50:05.951469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.951488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.381 [2024-11-15 12:50:05.951502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.951520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.381 [2024-11-15 12:50:05.951534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.951557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.381 [2024-11-15 12:50:05.951572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:16.381 [2024-11-15 12:50:05.951592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.381 [2024-11-15 12:50:05.951605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.951653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:05.951677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.951699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:05.951714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.951734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:05.951750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.951771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:05.951785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.951806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:15:16.382 [2024-11-15 12:50:05.951820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.951840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:05.951854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.951874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:05.951888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.951908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:05.951939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.951959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:05.951974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.951994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:05.952015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.952049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:05.952078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.952097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:05.952111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.952131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:05.952151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.952171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:05.952185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.952206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 
lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.382 [2024-11-15 12:50:05.952220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.952240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.382 [2024-11-15 12:50:05.952253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.952273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.382 [2024-11-15 12:50:05.952287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.952306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.382 [2024-11-15 12:50:05.952320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.952339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.382 [2024-11-15 12:50:05.952354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.952374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.382 [2024-11-15 12:50:05.952388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.952408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.382 [2024-11-15 12:50:05.952421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.952441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.382 [2024-11-15 12:50:05.952454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.952474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.382 [2024-11-15 12:50:05.952488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.952507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:79072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.382 [2024-11-15 12:50:05.952521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.952540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.382 [2024-11-15 12:50:05.952554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.952580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.382 [2024-11-15 12:50:05.952595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.952630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.382 [2024-11-15 12:50:05.952644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.952676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.382 [2024-11-15 12:50:05.952694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.952715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.382 [2024-11-15 12:50:05.952730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.953424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.382 [2024-11-15 12:50:05.953452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.953484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:05.953500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.953526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:05.953540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.953566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:05.953580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.953605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:05.953652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 
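The notices running through this stretch of the log are the host printing each I/O that completed with NVMe status ASYMMETRIC ACCESS INACCESSIBLE (sct/sc 03/02) while the active path's ANA group was inaccessible; the multipath test expects these and keeps retrying until the path state changes, which is why the IOPS samples further down dip and then recover. When a flood like this needs summarising during triage, a small shell helper along the following lines can be used. This is only a sketch: console.log here is a stand-in for wherever this console output was saved, not a file produced by the test itself.

  # Count completions that carried the ANA "inaccessible" status...
  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' console.log
  # ...and tally the READ/WRITE command prints emitted alongside them.
  grep -oE '(READ|WRITE) sqid' console.log | sort | uniq -c
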
00:15:16.382 [2024-11-15 12:50:05.953705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:05.953727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.953756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:05.953775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.953802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:05.953817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.953875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:05.953896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.953924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:05.953939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.953966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:05.953981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.954022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:05.954037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.954063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:05.954092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.954117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:05.954131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.954156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:05.954170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:73 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.954196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:05.954210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:05.954246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:05.954262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:15:16.382 9393.80 IOPS, 36.69 MiB/s [2024-11-15T12:50:25.052Z] 8806.69 IOPS, 34.40 MiB/s [2024-11-15T12:50:25.052Z] 8288.65 IOPS, 32.38 MiB/s [2024-11-15T12:50:25.052Z] 7828.17 IOPS, 30.58 MiB/s [2024-11-15T12:50:25.052Z] 7614.74 IOPS, 29.75 MiB/s [2024-11-15T12:50:25.052Z] 7753.15 IOPS, 30.29 MiB/s [2024-11-15T12:50:25.052Z] 7881.10 IOPS, 30.79 MiB/s [2024-11-15T12:50:25.052Z] 8110.77 IOPS, 31.68 MiB/s [2024-11-15T12:50:25.052Z] 8371.96 IOPS, 32.70 MiB/s [2024-11-15T12:50:25.052Z] 8599.50 IOPS, 33.59 MiB/s [2024-11-15T12:50:25.052Z] 8725.16 IOPS, 34.08 MiB/s [2024-11-15T12:50:25.052Z] 8794.19 IOPS, 34.35 MiB/s [2024-11-15T12:50:25.052Z] 8851.00 IOPS, 34.57 MiB/s [2024-11-15T12:50:25.052Z] 8937.89 IOPS, 34.91 MiB/s [2024-11-15T12:50:25.052Z] 9118.10 IOPS, 35.62 MiB/s [2024-11-15T12:50:25.052Z] 9270.13 IOPS, 36.21 MiB/s [2024-11-15T12:50:25.052Z] [2024-11-15 12:50:22.150519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:90728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:22.150579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:22.150657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:90192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.382 [2024-11-15 12:50:22.150697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:22.150723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:90744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:22.150737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:15:16.382 [2024-11-15 12:50:22.150756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:90760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.382 [2024-11-15 12:50:22.150769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.150788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:90776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.383 [2024-11-15 12:50:22.150801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.150820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:90792 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.383 [2024-11-15 12:50:22.150833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.150851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:90808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.383 [2024-11-15 12:50:22.150864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.150883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:90328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.383 [2024-11-15 12:50:22.150896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.150914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:90360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.383 [2024-11-15 12:50:22.150927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.150946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:90384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.383 [2024-11-15 12:50:22.150959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.150978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:90416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.383 [2024-11-15 12:50:22.150991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.151009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.383 [2024-11-15 12:50:22.151022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.151041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:90432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.383 [2024-11-15 12:50:22.151054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.151072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:90840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.383 [2024-11-15 12:50:22.151085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.151113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:90216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.383 [2024-11-15 12:50:22.151128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.151147] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:65 nsid:1 lba:90256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.383 [2024-11-15 12:50:22.151160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.151178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:90464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.383 [2024-11-15 12:50:22.151191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.151213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:90496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.383 [2024-11-15 12:50:22.151227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.151245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:90528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.383 [2024-11-15 12:50:22.151258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.151277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:90552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.383 [2024-11-15 12:50:22.151290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.151309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:90304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.383 [2024-11-15 12:50:22.151323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.151359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:90856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.383 [2024-11-15 12:50:22.151373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.151392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.383 [2024-11-15 12:50:22.151406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.151425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.383 [2024-11-15 12:50:22.151439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.151458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:90904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.383 [2024-11-15 12:50:22.151471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 
12:50:22.151491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:90920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.383 [2024-11-15 12:50:22.151504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.151532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:90336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.383 [2024-11-15 12:50:22.151547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.151566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.383 [2024-11-15 12:50:22.151580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.151600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:90408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.383 [2024-11-15 12:50:22.151613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.151647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:90440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.383 [2024-11-15 12:50:22.151661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.151680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:90576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.383 [2024-11-15 12:50:22.151694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.151713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:90936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.383 [2024-11-15 12:50:22.151727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.151746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:90952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.383 [2024-11-15 12:50:22.151759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.151780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:90968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.383 [2024-11-15 12:50:22.151794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.151813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.383 [2024-11-15 12:50:22.151826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 
cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.151845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.383 [2024-11-15 12:50:22.151859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.151878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:91016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.383 [2024-11-15 12:50:22.151891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:15:16.383 [2024-11-15 12:50:22.151910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:90632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.384 [2024-11-15 12:50:22.151924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.151943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:90664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.384 [2024-11-15 12:50:22.151965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.151985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:90488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.384 [2024-11-15 12:50:22.151999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.152018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.384 [2024-11-15 12:50:22.152031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.152050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:91048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.384 [2024-11-15 12:50:22.152064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.152083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:91064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.384 [2024-11-15 12:50:22.152096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.152115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:91080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.384 [2024-11-15 12:50:22.152129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.152147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:90520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.384 [2024-11-15 12:50:22.152161] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.152180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:90560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.384 [2024-11-15 12:50:22.152194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.152231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:90688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.384 [2024-11-15 12:50:22.152250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.152270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:91088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.384 [2024-11-15 12:50:22.152284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.152303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:91104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.384 [2024-11-15 12:50:22.152318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.152338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:91120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.384 [2024-11-15 12:50:22.152351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.152370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:91136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.384 [2024-11-15 12:50:22.152393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.152414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:90720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.384 [2024-11-15 12:50:22.152428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.152447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:90752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.384 [2024-11-15 12:50:22.152461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.152480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:90784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.384 [2024-11-15 12:50:22.152493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.152513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:90816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.384 [2024-11-15 
12:50:22.152526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.152545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.384 [2024-11-15 12:50:22.152558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.152578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:91168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.384 [2024-11-15 12:50:22.152591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.152624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:91184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.384 [2024-11-15 12:50:22.152639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.152658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:90848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.384 [2024-11-15 12:50:22.152672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.152691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:90584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.384 [2024-11-15 12:50:22.152705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.152724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:91200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.384 [2024-11-15 12:50:22.152737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.152756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:91216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.384 [2024-11-15 12:50:22.152770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.152790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:91232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.384 [2024-11-15 12:50:22.152803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.152831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:90608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.384 [2024-11-15 12:50:22.152846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.152866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:90640 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.384 [2024-11-15 12:50:22.152880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.154358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:90864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.384 [2024-11-15 12:50:22.154388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.154413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:90896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.384 [2024-11-15 12:50:22.154430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.154450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:90928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.384 [2024-11-15 12:50:22.154464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.154483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:90696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.384 [2024-11-15 12:50:22.154496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.154516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:91248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.384 [2024-11-15 12:50:22.154529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.154549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:91264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.384 [2024-11-15 12:50:22.154562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.154581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:91280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.384 [2024-11-15 12:50:22.154594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.154614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:91296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.384 [2024-11-15 12:50:22.154627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:15:16.384 [2024-11-15 12:50:22.154660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.384 [2024-11-15 12:50:22.154678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:15:16.384 9402.97 IOPS, 36.73 MiB/s [2024-11-15T12:50:25.054Z] 9440.38 IOPS, 36.88 
MiB/s [2024-11-15T12:50:25.054Z] 9466.55 IOPS, 36.98 MiB/s [2024-11-15T12:50:25.054Z] Received shutdown signal, test time was about 33.391049 seconds 00:15:16.384 00:15:16.384 Latency(us) 00:15:16.384 [2024-11-15T12:50:25.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.384 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:16.384 Verification LBA range: start 0x0 length 0x4000 00:15:16.385 Nvme0n1 : 33.39 9475.93 37.02 0.00 0.00 13480.22 655.36 4026531.84 00:15:16.385 [2024-11-15T12:50:25.055Z] =================================================================================================================== 00:15:16.385 [2024-11-15T12:50:25.055Z] Total : 9475.93 37.02 0.00 0.00 13480.22 655.36 4026531.84 00:15:16.385 12:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:16.644 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:15:16.644 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:16.644 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:15:16.644 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:16.644 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:15:16.644 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:16.644 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:15:16.644 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:16.644 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:16.644 rmmod nvme_tcp 00:15:16.644 rmmod nvme_fabrics 00:15:16.644 rmmod nvme_keyring 00:15:16.644 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:16.644 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:15:16.644 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:15:16.644 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 75782 ']' 00:15:16.644 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 75782 00:15:16.644 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 75782 ']' 00:15:16.644 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 75782 00:15:16.644 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:15:16.644 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:16.644 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75782 00:15:16.644 killing process with pid 75782 00:15:16.644 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:16.644 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:16.644 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75782' 00:15:16.644 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 75782 00:15:16.644 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 75782 00:15:16.903 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:16.903 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:16.903 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:16.903 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:15:16.903 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:15:16.903 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:15:16.903 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:16.903 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:16.903 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:16.903 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:16.904 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:16.904 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:16.904 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:16.904 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:16.904 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:16.904 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:16.904 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:16.904 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:16.904 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:16.904 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:17.163 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:17.163 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:17.163 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:17.163 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.163 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:15:17.163 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.163 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:15:17.163 00:15:17.163 real 0m39.639s 00:15:17.163 user 2m7.561s 00:15:17.163 sys 0m11.452s 00:15:17.163 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:17.163 12:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:17.163 ************************************ 00:15:17.163 END TEST nvmf_host_multipath_status 00:15:17.163 ************************************ 00:15:17.163 12:50:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:17.163 12:50:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:17.163 12:50:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:17.163 12:50:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:17.163 ************************************ 00:15:17.163 START TEST nvmf_discovery_remove_ifc 00:15:17.163 ************************************ 00:15:17.163 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:17.163 * Looking for test storage... 00:15:17.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:17.163 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:17.163 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:15:17.163 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:17.423 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:17.423 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:17.423 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:17.423 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:17.423 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:15:17.423 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:15:17.423 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:15:17.423 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:15:17.423 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:15:17.423 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:15:17.423 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:15:17.423 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:17.423 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:15:17.423 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- 
# : 1 00:15:17.423 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:17.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.424 --rc genhtml_branch_coverage=1 00:15:17.424 --rc genhtml_function_coverage=1 00:15:17.424 --rc genhtml_legend=1 00:15:17.424 --rc geninfo_all_blocks=1 00:15:17.424 --rc geninfo_unexecuted_blocks=1 00:15:17.424 00:15:17.424 ' 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:17.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.424 --rc genhtml_branch_coverage=1 00:15:17.424 --rc genhtml_function_coverage=1 00:15:17.424 --rc genhtml_legend=1 00:15:17.424 --rc geninfo_all_blocks=1 00:15:17.424 --rc geninfo_unexecuted_blocks=1 00:15:17.424 00:15:17.424 ' 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:17.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.424 --rc genhtml_branch_coverage=1 00:15:17.424 --rc genhtml_function_coverage=1 00:15:17.424 --rc genhtml_legend=1 00:15:17.424 --rc geninfo_all_blocks=1 00:15:17.424 --rc geninfo_unexecuted_blocks=1 00:15:17.424 00:15:17.424 ' 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:17.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.424 --rc genhtml_branch_coverage=1 00:15:17.424 --rc genhtml_function_coverage=1 00:15:17.424 --rc genhtml_legend=1 00:15:17.424 --rc geninfo_all_blocks=1 00:15:17.424 --rc 
geninfo_unexecuted_blocks=1 00:15:17.424 00:15:17.424 ' 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:15:17.424 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:17.424 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:17.425 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:17.425 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:17.425 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:17.425 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:17.425 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:17.425 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:17.425 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:17.425 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:17.425 12:50:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:17.425 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:17.425 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:17.425 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:17.425 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:17.425 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:17.425 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:17.425 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:17.425 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:17.425 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:17.425 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:17.425 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:17.425 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:17.425 Cannot find device "nvmf_init_br" 00:15:17.425 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:15:17.425 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:17.425 Cannot find device "nvmf_init_br2" 00:15:17.425 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:15:17.425 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:17.425 Cannot find device "nvmf_tgt_br" 00:15:17.425 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:15:17.425 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:17.425 Cannot find device "nvmf_tgt_br2" 00:15:17.425 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:15:17.425 12:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:17.425 Cannot find device "nvmf_init_br" 00:15:17.425 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:15:17.425 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:17.425 Cannot find device "nvmf_init_br2" 00:15:17.425 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:15:17.425 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:17.425 Cannot find device "nvmf_tgt_br" 00:15:17.425 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:15:17.425 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set 
nvmf_tgt_br2 down 00:15:17.425 Cannot find device "nvmf_tgt_br2" 00:15:17.425 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:15:17.425 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:17.425 Cannot find device "nvmf_br" 00:15:17.425 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:15:17.425 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:17.425 Cannot find device "nvmf_init_if" 00:15:17.425 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:15:17.425 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:17.425 Cannot find device "nvmf_init_if2" 00:15:17.425 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:15:17.425 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:17.425 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:17.425 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:15:17.425 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:17.425 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:17.425 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:15:17.425 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:17.425 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:17.684 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:17.684 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:17.684 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:17.684 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:17.684 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:17.684 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:17.684 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:17.684 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:17.684 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:17.684 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:17.684 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:17.684 12:50:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:17.684 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:17.684 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:17.684 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:17.685 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:17.685 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:17.685 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:17.685 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:17.685 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:17.685 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:17.685 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:17.685 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:17.685 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:17.685 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:17.685 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:17.685 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:17.685 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:17.685 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:17.685 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:17.685 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:17.944 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:17.944 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:15:17.944 00:15:17.944 --- 10.0.0.3 ping statistics --- 00:15:17.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.944 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:15:17.944 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:17.944 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:15:17.944 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:15:17.944 00:15:17.944 --- 10.0.0.4 ping statistics --- 00:15:17.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.944 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:17.944 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:17.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:17.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:17.944 00:15:17.944 --- 10.0.0.1 ping statistics --- 00:15:17.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.944 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:17.944 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:17.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:17.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:15:17.944 00:15:17.944 --- 10.0.0.2 ping statistics --- 00:15:17.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.944 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:15:17.944 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.944 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:15:17.944 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:17.944 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.944 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:17.944 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:17.944 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.944 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:17.945 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:17.945 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:15:17.945 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:17.945 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:17.945 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:17.945 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=76679 00:15:17.945 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 76679 00:15:17.945 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:17.945 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 76679 ']' 00:15:17.945 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.945 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 
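For reference, the nvmf_veth_init sequence traced above (common.sh@177 through @225) boils down to the standalone sketch below. This is an approximation of what the helper does, not its exact body; the interface, namespace, and address values are the ones shown in the trace, and it needs root on Linux:

  # Sketch: two initiator-side veth pairs and two target-side veth pairs,
  # bridged together, with the target ends living in a separate namespace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # one bridge ties the four *_br peers together so host and namespace can talk
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
  done
  # open the NVMe/TCP port and allow bridge-local forwarding; the real helper
  # additionally tags each rule with -m comment 'SPDK_NVMF:...' so teardown can
  # strip them later via iptables-save | grep -v SPDK_NVMF | iptables-restore
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # the same sanity pings the log records above
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

This bridge-plus-namespace layout is what lets a single VM act as both NVMe/TCP initiator (10.0.0.1 and 10.0.0.2) and target (10.0.0.3 and 10.0.0.4 inside nvmf_tgt_ns_spdk) over real sockets.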
00:15:17.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.945 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.945 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.945 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:17.945 [2024-11-15 12:50:26.459308] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:15:17.945 [2024-11-15 12:50:26.459394] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.945 [2024-11-15 12:50:26.605966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.204 [2024-11-15 12:50:26.633815] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:18.204 [2024-11-15 12:50:26.633881] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:18.204 [2024-11-15 12:50:26.633907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:18.204 [2024-11-15 12:50:26.633915] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:18.204 [2024-11-15 12:50:26.633921] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:18.204 [2024-11-15 12:50:26.634280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.204 [2024-11-15 12:50:26.660835] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:18.204 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:18.204 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:15:18.204 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:18.204 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:18.204 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:18.205 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:18.205 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:15:18.205 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.205 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:18.205 [2024-11-15 12:50:26.773342] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:18.205 [2024-11-15 12:50:26.781452] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:15:18.205 null0 00:15:18.205 [2024-11-15 12:50:26.813376] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:18.205 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.205 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=76698 00:15:18.205 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:15:18.205 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 76698 /tmp/host.sock 00:15:18.205 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 76698 ']' 00:15:18.205 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:15:18.205 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:18.205 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:18.205 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:18.205 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:18.205 12:50:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:18.464 [2024-11-15 12:50:26.893787] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:15:18.464 [2024-11-15 12:50:26.893894] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76698 ] 00:15:18.464 [2024-11-15 12:50:27.046102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.464 [2024-11-15 12:50:27.084786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.723 12:50:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:18.723 12:50:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:15:18.723 12:50:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:18.723 12:50:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:15:18.723 12:50:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.723 12:50:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:18.723 12:50:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.723 12:50:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:15:18.723 12:50:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.723 12:50:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:18.723 [2024-11-15 12:50:27.185841] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:18.723 12:50:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.723 12:50:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:15:18.723 12:50:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.723 12:50:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:19.662 [2024-11-15 12:50:28.226289] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:19.662 [2024-11-15 12:50:28.226334] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:19.662 [2024-11-15 12:50:28.226354] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:19.662 [2024-11-15 12:50:28.232325] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:15:19.662 [2024-11-15 12:50:28.286766] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:15:19.662 [2024-11-15 12:50:28.287685] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x190efb0:1 started. 00:15:19.662 [2024-11-15 12:50:28.289279] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:19.662 [2024-11-15 12:50:28.289348] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:19.662 [2024-11-15 12:50:28.289370] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:19.662 [2024-11-15 12:50:28.289384] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:19.662 [2024-11-15 12:50:28.289403] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:19.662 12:50:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.662 12:50:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:15:19.662 12:50:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:19.662 12:50:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:19.662 [2024-11-15 12:50:28.294963] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x190efb0 was disconnected and freed. delete nvme_qpair. 
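The host-side flow traced above, plus the get_bdev_list/wait_for_bdev polling that follows, reduces to a handful of RPC calls. This is a condensed sketch rather than the script's actual code: rpc_cmd in the trace is the test framework's RPC wrapper, so the sketch substitutes the repo's scripts/rpc.py; the socket path, NQN, and discovery flags are copied verbatim from the log:

  RPC="scripts/rpc.py -s /tmp/host.sock"   # the host app was started with -r /tmp/host.sock

  # the host app runs with --wait-for-rpc, so bdev_nvme is configured first,
  # then the framework is initialized
  $RPC bdev_nvme_set_options -e 1
  $RPC framework_start_init

  # attach to the discovery service on 10.0.0.3:8009 and wait for the NVM
  # subsystem at 10.0.0.3:4420 to be connected; the short timeouts are what
  # make the interface-removal part of the test converge quickly
  $RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach

  # the test's get_bdev_list: one sorted line of bdev names ("nvme0n1" here)
  get_bdev_list() {
    $RPC bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  # wait_for_bdev, roughly: poll once a second until the list matches
  wait_for_bdev() {
    while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done
  }

  wait_for_bdev nvme0n1

The trace that follows is exactly this loop in action: the target-side address is deleted and nvmf_tgt_if is brought down, and get_bdev_list keeps returning nvme0n1 until the controller is finally dropped.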
00:15:19.662 12:50:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:19.662 12:50:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.662 12:50:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:19.662 12:50:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:19.662 12:50:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:19.662 12:50:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.921 12:50:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:15:19.921 12:50:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:15:19.921 12:50:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:15:19.921 12:50:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:15:19.921 12:50:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:19.921 12:50:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:19.921 12:50:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.921 12:50:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:19.921 12:50:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:19.921 12:50:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:19.921 12:50:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:19.921 12:50:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.921 12:50:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:19.921 12:50:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:20.859 12:50:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:20.859 12:50:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:20.859 12:50:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:20.859 12:50:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.859 12:50:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:20.859 12:50:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:20.859 12:50:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:20.859 12:50:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.859 12:50:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:20.859 12:50:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:22.237 12:50:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:22.237 12:50:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:22.237 12:50:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:22.237 12:50:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.237 12:50:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:22.237 12:50:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:22.237 12:50:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:22.237 12:50:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.237 12:50:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:22.237 12:50:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:23.175 12:50:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:23.175 12:50:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:23.175 12:50:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.175 12:50:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:23.175 12:50:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:23.175 12:50:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:23.175 12:50:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:23.175 12:50:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.175 12:50:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:23.175 12:50:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:24.114 12:50:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:24.114 12:50:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:24.114 12:50:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:24.114 12:50:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:24.114 12:50:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.114 12:50:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:24.114 12:50:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:24.114 12:50:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:15:24.114 12:50:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:24.114 12:50:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:25.052 12:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:25.052 12:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:25.052 12:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:25.052 12:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:25.052 12:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:25.052 12:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.052 12:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:25.052 12:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.052 12:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:25.052 12:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:25.052 [2024-11-15 12:50:33.717338] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:15:25.052 [2024-11-15 12:50:33.717387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:25.052 [2024-11-15 12:50:33.717401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:25.052 [2024-11-15 12:50:33.717411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:25.052 [2024-11-15 12:50:33.717419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:25.052 [2024-11-15 12:50:33.717427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:25.052 [2024-11-15 12:50:33.717436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:25.052 [2024-11-15 12:50:33.717444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:25.052 [2024-11-15 12:50:33.717452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:25.052 [2024-11-15 12:50:33.717460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:25.052 [2024-11-15 12:50:33.717467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:25.052 [2024-11-15 12:50:33.717475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18eb240 is same with the state(6) to be set 
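At this point the trace switches from the polling loop to the NVMe driver's view of the failure: with 10.0.0.3 removed and nvmf_tgt_if down, the host's TCP receive eventually fails with errno 110 (ETIMEDOUT, as nvme_tcp_read_data prints above), the outstanding admin commands dumped above (the async event requests and the keep-alive) are completed as ABORTED - SQ DELETION, and bdev_nvme starts tearing down the qpair and reconnecting.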
00:15:25.311 [2024-11-15 12:50:33.727334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eb240 (9): Bad file descriptor 00:15:25.311 [2024-11-15 12:50:33.737350] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:15:25.311 [2024-11-15 12:50:33.737387] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:15:25.311 [2024-11-15 12:50:33.737395] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:15:25.311 [2024-11-15 12:50:33.737401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:15:25.311 [2024-11-15 12:50:33.737431] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:15:26.261 12:50:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:26.261 12:50:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:26.261 12:50:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:26.261 12:50:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.261 12:50:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:26.261 12:50:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:26.261 12:50:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:26.261 [2024-11-15 12:50:34.742748] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:15:26.261 [2024-11-15 12:50:34.742852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18eb240 with addr=10.0.0.3, port=4420 00:15:26.261 [2024-11-15 12:50:34.742895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18eb240 is same with the state(6) to be set 00:15:26.261 [2024-11-15 12:50:34.742959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eb240 (9): Bad file descriptor 00:15:26.262 [2024-11-15 12:50:34.743845] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:15:26.262 [2024-11-15 12:50:34.743934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:15:26.262 [2024-11-15 12:50:34.743971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:15:26.262 [2024-11-15 12:50:34.743993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:15:26.262 [2024-11-15 12:50:34.744015] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:15:26.262 [2024-11-15 12:50:34.744028] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:15:26.262 [2024-11-15 12:50:34.744039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:15:26.262 [2024-11-15 12:50:34.744060] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:15:26.262 [2024-11-15 12:50:34.744072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:15:26.262 12:50:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.262 12:50:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:26.262 12:50:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:27.197 [2024-11-15 12:50:35.744123] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:15:27.197 [2024-11-15 12:50:35.744165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:15:27.197 [2024-11-15 12:50:35.744185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:15:27.197 [2024-11-15 12:50:35.744210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:15:27.197 [2024-11-15 12:50:35.744219] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:15:27.197 [2024-11-15 12:50:35.744227] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:15:27.197 [2024-11-15 12:50:35.744232] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:15:27.197 [2024-11-15 12:50:35.744237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
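Reading the timestamps against the discovery options gives the expected arithmetic: the qpair is torn down and the first reconnect starts at 12:50:33.7, that attempt fails at 12:50:34.7 (the uring connect() above, errno 110), a second attempt follows after --reconnect-delay-sec 1, and --ctrlr-loss-timeout-sec 2 caps the whole effort at roughly two seconds. By 12:50:35.7 the reset is abandoned, the discovery entry for nqn.2016-06.io.spdk:cnode0 is removed in the entries below, and get_bdev_list comes back empty, which is the condition wait_for_bdev '' was polling for.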
00:15:27.197 [2024-11-15 12:50:35.744264] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:15:27.197 [2024-11-15 12:50:35.744295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.197 [2024-11-15 12:50:35.744308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.197 [2024-11-15 12:50:35.744320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.197 [2024-11-15 12:50:35.744328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.197 [2024-11-15 12:50:35.744338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.197 [2024-11-15 12:50:35.744345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.197 [2024-11-15 12:50:35.744353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.197 [2024-11-15 12:50:35.744361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.198 [2024-11-15 12:50:35.744370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.198 [2024-11-15 12:50:35.744377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.198 [2024-11-15 12:50:35.744385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:15:27.198 [2024-11-15 12:50:35.745097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1876a20 (9): Bad file descriptor 00:15:27.198 [2024-11-15 12:50:35.746110] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:15:27.198 [2024-11-15 12:50:35.746132] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:15:27.198 12:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:27.198 12:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:27.198 12:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:27.198 12:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.198 12:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:27.198 12:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:27.198 12:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:27.198 12:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.198 12:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:15:27.198 12:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:27.198 12:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:27.198 12:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:15:27.198 12:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:27.198 12:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:27.198 12:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:27.198 12:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.198 12:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:27.198 12:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:27.198 12:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:27.198 12:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.456 12:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:15:27.456 12:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:28.393 12:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:28.393 12:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:28.393 12:50:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:28.393 12:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.393 12:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:28.393 12:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:28.393 12:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:28.393 12:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.393 12:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:15:28.393 12:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:29.405 [2024-11-15 12:50:37.749919] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:29.405 [2024-11-15 12:50:37.749945] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:29.406 [2024-11-15 12:50:37.749979] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:29.406 [2024-11-15 12:50:37.755955] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:15:29.406 [2024-11-15 12:50:37.810259] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:15:29.406 [2024-11-15 12:50:37.811016] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1917290:1 started. 00:15:29.406 [2024-11-15 12:50:37.812136] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:29.406 [2024-11-15 12:50:37.812191] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:29.406 [2024-11-15 12:50:37.812212] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:29.406 [2024-11-15 12:50:37.812226] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:15:29.406 [2024-11-15 12:50:37.812234] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:29.406 [2024-11-15 12:50:37.818718] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1917290 was disconnected and freed. delete nvme_qpair. 
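The recovery half of the test is symmetric to the removal and, in the same hedged sketch style as above, amounts to restoring the address, bringing the interface back up, and polling until the rediscovered namespace shows up under a new controller name:

  # restore the target-side interface inside the namespace (values from the trace)
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

  # the discovery controller is still attached, so the NVM subsystem is found
  # again, a new ctrlr (nvme1) is created, and its namespace bdev reappears
  wait_for_bdev nvme1n1        # helper sketched earlier

The discovery_log_page_cb, nvme_ctrlr_create_done, and discovery_attach_controller_done lines above are that re-attach happening; once the bdev_get_bdevs calls that follow report nvme1n1, the traps are cleared and the test proceeds to teardown.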
00:15:29.406 12:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:29.406 12:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:29.406 12:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.406 12:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:29.406 12:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:29.406 12:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:29.406 12:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:29.406 12:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.406 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:15:29.406 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:15:29.406 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 76698 00:15:29.406 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 76698 ']' 00:15:29.406 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 76698 00:15:29.406 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:15:29.406 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:29.406 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76698 00:15:29.406 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:29.406 killing process with pid 76698 00:15:29.406 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:29.406 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76698' 00:15:29.406 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 76698 00:15:29.406 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 76698 00:15:29.668 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:15:29.668 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:29.668 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:15:29.668 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:29.668 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:15:29.668 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:29.668 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:29.668 rmmod nvme_tcp 00:15:29.668 rmmod nvme_fabrics 00:15:29.668 rmmod nvme_keyring 00:15:29.668 12:50:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:29.668 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:15:29.668 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:15:29.668 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 76679 ']' 00:15:29.668 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 76679 00:15:29.668 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 76679 ']' 00:15:29.668 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 76679 00:15:29.668 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:15:29.668 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:29.668 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76679 00:15:29.668 killing process with pid 76679 00:15:29.668 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:29.668 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:29.668 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76679' 00:15:29.668 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 76679 00:15:29.668 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 76679 00:15:29.927 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:29.927 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:29.927 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:29.927 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:15:29.927 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:15:29.927 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:15:29.927 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:29.927 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:29.927 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:29.927 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:29.927 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:29.927 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:29.927 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:29.927 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:29.927 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:29.927 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:29.927 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:29.927 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:29.927 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:30.186 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:30.186 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:30.186 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:30.186 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:30.186 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.186 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:30.186 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.186 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:15:30.186 00:15:30.186 real 0m12.974s 00:15:30.186 user 0m22.147s 00:15:30.186 sys 0m2.312s 00:15:30.186 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:30.186 12:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:30.186 ************************************ 00:15:30.186 END TEST nvmf_discovery_remove_ifc 00:15:30.186 ************************************ 00:15:30.186 12:50:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:15:30.186 12:50:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:30.186 12:50:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:30.186 12:50:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:30.186 ************************************ 00:15:30.186 START TEST nvmf_identify_kernel_target 00:15:30.186 ************************************ 00:15:30.187 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:15:30.187 * Looking for test storage... 
00:15:30.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:30.187 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:30.187 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:15:30.187 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:30.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.447 --rc genhtml_branch_coverage=1 00:15:30.447 --rc genhtml_function_coverage=1 00:15:30.447 --rc genhtml_legend=1 00:15:30.447 --rc geninfo_all_blocks=1 00:15:30.447 --rc geninfo_unexecuted_blocks=1 00:15:30.447 00:15:30.447 ' 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:30.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.447 --rc genhtml_branch_coverage=1 00:15:30.447 --rc genhtml_function_coverage=1 00:15:30.447 --rc genhtml_legend=1 00:15:30.447 --rc geninfo_all_blocks=1 00:15:30.447 --rc geninfo_unexecuted_blocks=1 00:15:30.447 00:15:30.447 ' 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:30.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.447 --rc genhtml_branch_coverage=1 00:15:30.447 --rc genhtml_function_coverage=1 00:15:30.447 --rc genhtml_legend=1 00:15:30.447 --rc geninfo_all_blocks=1 00:15:30.447 --rc geninfo_unexecuted_blocks=1 00:15:30.447 00:15:30.447 ' 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:30.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.447 --rc genhtml_branch_coverage=1 00:15:30.447 --rc genhtml_function_coverage=1 00:15:30.447 --rc genhtml_legend=1 00:15:30.447 --rc geninfo_all_blocks=1 00:15:30.447 --rc geninfo_unexecuted_blocks=1 00:15:30.447 00:15:30.447 ' 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.447 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:30.448 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:15:30.448 12:50:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:30.448 12:50:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:30.448 Cannot find device "nvmf_init_br" 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:30.448 Cannot find device "nvmf_init_br2" 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:30.448 Cannot find device "nvmf_tgt_br" 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:15:30.448 12:50:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:30.448 Cannot find device "nvmf_tgt_br2" 00:15:30.448 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:15:30.448 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:30.448 Cannot find device "nvmf_init_br" 00:15:30.448 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:15:30.448 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:30.448 Cannot find device "nvmf_init_br2" 00:15:30.448 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:15:30.448 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:30.448 Cannot find device "nvmf_tgt_br" 00:15:30.448 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:15:30.448 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:30.448 Cannot find device "nvmf_tgt_br2" 00:15:30.448 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:15:30.448 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:30.448 Cannot find device "nvmf_br" 00:15:30.448 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:15:30.448 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:30.448 Cannot find device "nvmf_init_if" 00:15:30.448 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:15:30.448 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:30.448 Cannot find device "nvmf_init_if2" 00:15:30.448 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:15:30.448 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:30.448 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:30.448 12:50:39 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:15:30.448 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:30.448 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:30.448 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:15:30.448 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:30.448 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:30.448 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:30.708 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:30.708 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:30.708 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:30.708 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:30.708 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:30.708 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:30.708 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:30.708 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:30.708 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:30.708 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:30.708 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:30.708 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:30.708 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:30.708 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:30.708 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:30.708 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:30.708 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:30.708 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:30.708 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:30.708 12:50:39 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:30.708 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:30.708 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:30.708 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:30.708 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:30.708 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:30.708 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:30.708 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:30.708 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:30.708 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:30.708 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:30.708 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:30.708 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:15:30.708 00:15:30.708 --- 10.0.0.3 ping statistics --- 00:15:30.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.708 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:15:30.708 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:30.708 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:30.708 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:15:30.708 00:15:30.708 --- 10.0.0.4 ping statistics --- 00:15:30.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.709 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:30.709 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:30.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:30.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:15:30.709 00:15:30.709 --- 10.0.0.1 ping statistics --- 00:15:30.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.709 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:15:30.709 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:30.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:30.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:15:30.709 00:15:30.709 --- 10.0.0.2 ping statistics --- 00:15:30.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.709 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:15:30.709 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:30.709 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:15:30.709 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:30.709 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:30.709 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:30.709 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:30.709 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:30.709 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:30.709 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:30.968 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:15:30.968 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:15:30.968 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:15:30.968 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:30.968 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:30.968 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:30.968 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:30.968 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:30.968 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:30.968 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:30.968 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:30.968 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:30.968 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:15:30.968 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:15:30.968 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:15:30.968 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:15:30.968 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:30.968 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:15:30.968 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:15:30.968 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:15:30.968 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:15:30.968 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:15:30.968 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:15:30.968 12:50:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:31.227 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:31.227 Waiting for block devices as requested 00:15:31.227 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:31.486 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:31.486 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:31.486 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:31.486 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:15:31.486 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:31.486 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:31.486 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:31.486 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:15:31.486 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:15:31.486 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:15:31.486 No valid GPT data, bailing 00:15:31.486 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:31.486 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:15:31.486 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:15:31.486 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:15:31.486 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:31.486 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:15:31.486 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:15:31.486 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:31.486 12:50:40 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:31.486 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:31.486 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:15:31.486 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:15:31.486 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:15:31.746 No valid GPT data, bailing 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:15:31.746 No valid GPT data, bailing 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:15:31.746 No valid GPT data, bailing 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid=85bcfa6f-4742-42db-8cde-87c16c4a32fc -a 10.0.0.1 -t tcp -s 4420 00:15:31.746 00:15:31.746 Discovery Log Number of Records 2, Generation counter 2 00:15:31.746 =====Discovery Log Entry 0====== 00:15:31.746 trtype: tcp 00:15:31.746 adrfam: ipv4 00:15:31.746 subtype: current discovery subsystem 00:15:31.746 treq: not specified, sq flow control disable supported 00:15:31.746 portid: 1 00:15:31.746 trsvcid: 4420 00:15:31.746 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:31.746 traddr: 10.0.0.1 00:15:31.746 eflags: none 00:15:31.746 sectype: none 00:15:31.746 =====Discovery Log Entry 1====== 00:15:31.746 trtype: tcp 00:15:31.746 adrfam: ipv4 00:15:31.746 subtype: nvme subsystem 00:15:31.746 treq: not 
specified, sq flow control disable supported 00:15:31.746 portid: 1 00:15:31.746 trsvcid: 4420 00:15:31.746 subnqn: nqn.2016-06.io.spdk:testnqn 00:15:31.746 traddr: 10.0.0.1 00:15:31.746 eflags: none 00:15:31.746 sectype: none 00:15:31.746 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:15:31.746 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:15:32.006 ===================================================== 00:15:32.006 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:32.006 ===================================================== 00:15:32.006 Controller Capabilities/Features 00:15:32.006 ================================ 00:15:32.006 Vendor ID: 0000 00:15:32.006 Subsystem Vendor ID: 0000 00:15:32.006 Serial Number: 2359e01564674b737eab 00:15:32.006 Model Number: Linux 00:15:32.006 Firmware Version: 6.8.9-20 00:15:32.006 Recommended Arb Burst: 0 00:15:32.006 IEEE OUI Identifier: 00 00 00 00:15:32.006 Multi-path I/O 00:15:32.006 May have multiple subsystem ports: No 00:15:32.006 May have multiple controllers: No 00:15:32.006 Associated with SR-IOV VF: No 00:15:32.006 Max Data Transfer Size: Unlimited 00:15:32.006 Max Number of Namespaces: 0 00:15:32.006 Max Number of I/O Queues: 1024 00:15:32.006 NVMe Specification Version (VS): 1.3 00:15:32.006 NVMe Specification Version (Identify): 1.3 00:15:32.006 Maximum Queue Entries: 1024 00:15:32.006 Contiguous Queues Required: No 00:15:32.006 Arbitration Mechanisms Supported 00:15:32.006 Weighted Round Robin: Not Supported 00:15:32.006 Vendor Specific: Not Supported 00:15:32.006 Reset Timeout: 7500 ms 00:15:32.006 Doorbell Stride: 4 bytes 00:15:32.006 NVM Subsystem Reset: Not Supported 00:15:32.006 Command Sets Supported 00:15:32.006 NVM Command Set: Supported 00:15:32.006 Boot Partition: Not Supported 00:15:32.006 Memory Page Size Minimum: 4096 bytes 00:15:32.006 Memory Page Size Maximum: 4096 bytes 00:15:32.006 Persistent Memory Region: Not Supported 00:15:32.006 Optional Asynchronous Events Supported 00:15:32.006 Namespace Attribute Notices: Not Supported 00:15:32.006 Firmware Activation Notices: Not Supported 00:15:32.006 ANA Change Notices: Not Supported 00:15:32.006 PLE Aggregate Log Change Notices: Not Supported 00:15:32.006 LBA Status Info Alert Notices: Not Supported 00:15:32.006 EGE Aggregate Log Change Notices: Not Supported 00:15:32.006 Normal NVM Subsystem Shutdown event: Not Supported 00:15:32.006 Zone Descriptor Change Notices: Not Supported 00:15:32.006 Discovery Log Change Notices: Supported 00:15:32.006 Controller Attributes 00:15:32.006 128-bit Host Identifier: Not Supported 00:15:32.006 Non-Operational Permissive Mode: Not Supported 00:15:32.006 NVM Sets: Not Supported 00:15:32.006 Read Recovery Levels: Not Supported 00:15:32.006 Endurance Groups: Not Supported 00:15:32.006 Predictable Latency Mode: Not Supported 00:15:32.006 Traffic Based Keep ALive: Not Supported 00:15:32.006 Namespace Granularity: Not Supported 00:15:32.006 SQ Associations: Not Supported 00:15:32.006 UUID List: Not Supported 00:15:32.006 Multi-Domain Subsystem: Not Supported 00:15:32.006 Fixed Capacity Management: Not Supported 00:15:32.006 Variable Capacity Management: Not Supported 00:15:32.006 Delete Endurance Group: Not Supported 00:15:32.006 Delete NVM Set: Not Supported 00:15:32.006 Extended LBA Formats Supported: Not Supported 00:15:32.006 Flexible Data 
Placement Supported: Not Supported 00:15:32.006 00:15:32.006 Controller Memory Buffer Support 00:15:32.006 ================================ 00:15:32.006 Supported: No 00:15:32.006 00:15:32.006 Persistent Memory Region Support 00:15:32.006 ================================ 00:15:32.006 Supported: No 00:15:32.006 00:15:32.006 Admin Command Set Attributes 00:15:32.006 ============================ 00:15:32.006 Security Send/Receive: Not Supported 00:15:32.006 Format NVM: Not Supported 00:15:32.006 Firmware Activate/Download: Not Supported 00:15:32.006 Namespace Management: Not Supported 00:15:32.007 Device Self-Test: Not Supported 00:15:32.007 Directives: Not Supported 00:15:32.007 NVMe-MI: Not Supported 00:15:32.007 Virtualization Management: Not Supported 00:15:32.007 Doorbell Buffer Config: Not Supported 00:15:32.007 Get LBA Status Capability: Not Supported 00:15:32.007 Command & Feature Lockdown Capability: Not Supported 00:15:32.007 Abort Command Limit: 1 00:15:32.007 Async Event Request Limit: 1 00:15:32.007 Number of Firmware Slots: N/A 00:15:32.007 Firmware Slot 1 Read-Only: N/A 00:15:32.007 Firmware Activation Without Reset: N/A 00:15:32.007 Multiple Update Detection Support: N/A 00:15:32.007 Firmware Update Granularity: No Information Provided 00:15:32.007 Per-Namespace SMART Log: No 00:15:32.007 Asymmetric Namespace Access Log Page: Not Supported 00:15:32.007 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:32.007 Command Effects Log Page: Not Supported 00:15:32.007 Get Log Page Extended Data: Supported 00:15:32.007 Telemetry Log Pages: Not Supported 00:15:32.007 Persistent Event Log Pages: Not Supported 00:15:32.007 Supported Log Pages Log Page: May Support 00:15:32.007 Commands Supported & Effects Log Page: Not Supported 00:15:32.007 Feature Identifiers & Effects Log Page:May Support 00:15:32.007 NVMe-MI Commands & Effects Log Page: May Support 00:15:32.007 Data Area 4 for Telemetry Log: Not Supported 00:15:32.007 Error Log Page Entries Supported: 1 00:15:32.007 Keep Alive: Not Supported 00:15:32.007 00:15:32.007 NVM Command Set Attributes 00:15:32.007 ========================== 00:15:32.007 Submission Queue Entry Size 00:15:32.007 Max: 1 00:15:32.007 Min: 1 00:15:32.007 Completion Queue Entry Size 00:15:32.007 Max: 1 00:15:32.007 Min: 1 00:15:32.007 Number of Namespaces: 0 00:15:32.007 Compare Command: Not Supported 00:15:32.007 Write Uncorrectable Command: Not Supported 00:15:32.007 Dataset Management Command: Not Supported 00:15:32.007 Write Zeroes Command: Not Supported 00:15:32.007 Set Features Save Field: Not Supported 00:15:32.007 Reservations: Not Supported 00:15:32.007 Timestamp: Not Supported 00:15:32.007 Copy: Not Supported 00:15:32.007 Volatile Write Cache: Not Present 00:15:32.007 Atomic Write Unit (Normal): 1 00:15:32.007 Atomic Write Unit (PFail): 1 00:15:32.007 Atomic Compare & Write Unit: 1 00:15:32.007 Fused Compare & Write: Not Supported 00:15:32.007 Scatter-Gather List 00:15:32.007 SGL Command Set: Supported 00:15:32.007 SGL Keyed: Not Supported 00:15:32.007 SGL Bit Bucket Descriptor: Not Supported 00:15:32.007 SGL Metadata Pointer: Not Supported 00:15:32.007 Oversized SGL: Not Supported 00:15:32.007 SGL Metadata Address: Not Supported 00:15:32.007 SGL Offset: Supported 00:15:32.007 Transport SGL Data Block: Not Supported 00:15:32.007 Replay Protected Memory Block: Not Supported 00:15:32.007 00:15:32.007 Firmware Slot Information 00:15:32.007 ========================= 00:15:32.007 Active slot: 0 00:15:32.007 00:15:32.007 00:15:32.007 Error Log 
00:15:32.007 ========= 00:15:32.007 00:15:32.007 Active Namespaces 00:15:32.007 ================= 00:15:32.007 Discovery Log Page 00:15:32.007 ================== 00:15:32.007 Generation Counter: 2 00:15:32.007 Number of Records: 2 00:15:32.007 Record Format: 0 00:15:32.007 00:15:32.007 Discovery Log Entry 0 00:15:32.007 ---------------------- 00:15:32.007 Transport Type: 3 (TCP) 00:15:32.007 Address Family: 1 (IPv4) 00:15:32.007 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:32.007 Entry Flags: 00:15:32.007 Duplicate Returned Information: 0 00:15:32.007 Explicit Persistent Connection Support for Discovery: 0 00:15:32.007 Transport Requirements: 00:15:32.007 Secure Channel: Not Specified 00:15:32.007 Port ID: 1 (0x0001) 00:15:32.007 Controller ID: 65535 (0xffff) 00:15:32.007 Admin Max SQ Size: 32 00:15:32.007 Transport Service Identifier: 4420 00:15:32.007 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:32.007 Transport Address: 10.0.0.1 00:15:32.007 Discovery Log Entry 1 00:15:32.007 ---------------------- 00:15:32.007 Transport Type: 3 (TCP) 00:15:32.007 Address Family: 1 (IPv4) 00:15:32.007 Subsystem Type: 2 (NVM Subsystem) 00:15:32.007 Entry Flags: 00:15:32.007 Duplicate Returned Information: 0 00:15:32.007 Explicit Persistent Connection Support for Discovery: 0 00:15:32.007 Transport Requirements: 00:15:32.007 Secure Channel: Not Specified 00:15:32.007 Port ID: 1 (0x0001) 00:15:32.007 Controller ID: 65535 (0xffff) 00:15:32.007 Admin Max SQ Size: 32 00:15:32.007 Transport Service Identifier: 4420 00:15:32.007 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:15:32.007 Transport Address: 10.0.0.1 00:15:32.007 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:15:32.267 get_feature(0x01) failed 00:15:32.267 get_feature(0x02) failed 00:15:32.267 get_feature(0x04) failed 00:15:32.267 ===================================================== 00:15:32.267 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:15:32.267 ===================================================== 00:15:32.267 Controller Capabilities/Features 00:15:32.267 ================================ 00:15:32.267 Vendor ID: 0000 00:15:32.267 Subsystem Vendor ID: 0000 00:15:32.267 Serial Number: 4da0101ca31e35c38121 00:15:32.267 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:15:32.267 Firmware Version: 6.8.9-20 00:15:32.267 Recommended Arb Burst: 6 00:15:32.267 IEEE OUI Identifier: 00 00 00 00:15:32.267 Multi-path I/O 00:15:32.267 May have multiple subsystem ports: Yes 00:15:32.267 May have multiple controllers: Yes 00:15:32.267 Associated with SR-IOV VF: No 00:15:32.267 Max Data Transfer Size: Unlimited 00:15:32.267 Max Number of Namespaces: 1024 00:15:32.267 Max Number of I/O Queues: 128 00:15:32.267 NVMe Specification Version (VS): 1.3 00:15:32.267 NVMe Specification Version (Identify): 1.3 00:15:32.267 Maximum Queue Entries: 1024 00:15:32.267 Contiguous Queues Required: No 00:15:32.267 Arbitration Mechanisms Supported 00:15:32.267 Weighted Round Robin: Not Supported 00:15:32.267 Vendor Specific: Not Supported 00:15:32.267 Reset Timeout: 7500 ms 00:15:32.267 Doorbell Stride: 4 bytes 00:15:32.267 NVM Subsystem Reset: Not Supported 00:15:32.267 Command Sets Supported 00:15:32.267 NVM Command Set: Supported 00:15:32.267 Boot Partition: Not Supported 00:15:32.267 Memory 
Page Size Minimum: 4096 bytes 00:15:32.267 Memory Page Size Maximum: 4096 bytes 00:15:32.267 Persistent Memory Region: Not Supported 00:15:32.267 Optional Asynchronous Events Supported 00:15:32.267 Namespace Attribute Notices: Supported 00:15:32.267 Firmware Activation Notices: Not Supported 00:15:32.267 ANA Change Notices: Supported 00:15:32.267 PLE Aggregate Log Change Notices: Not Supported 00:15:32.268 LBA Status Info Alert Notices: Not Supported 00:15:32.268 EGE Aggregate Log Change Notices: Not Supported 00:15:32.268 Normal NVM Subsystem Shutdown event: Not Supported 00:15:32.268 Zone Descriptor Change Notices: Not Supported 00:15:32.268 Discovery Log Change Notices: Not Supported 00:15:32.268 Controller Attributes 00:15:32.268 128-bit Host Identifier: Supported 00:15:32.268 Non-Operational Permissive Mode: Not Supported 00:15:32.268 NVM Sets: Not Supported 00:15:32.268 Read Recovery Levels: Not Supported 00:15:32.268 Endurance Groups: Not Supported 00:15:32.268 Predictable Latency Mode: Not Supported 00:15:32.268 Traffic Based Keep ALive: Supported 00:15:32.268 Namespace Granularity: Not Supported 00:15:32.268 SQ Associations: Not Supported 00:15:32.268 UUID List: Not Supported 00:15:32.268 Multi-Domain Subsystem: Not Supported 00:15:32.268 Fixed Capacity Management: Not Supported 00:15:32.268 Variable Capacity Management: Not Supported 00:15:32.268 Delete Endurance Group: Not Supported 00:15:32.268 Delete NVM Set: Not Supported 00:15:32.268 Extended LBA Formats Supported: Not Supported 00:15:32.268 Flexible Data Placement Supported: Not Supported 00:15:32.268 00:15:32.268 Controller Memory Buffer Support 00:15:32.268 ================================ 00:15:32.268 Supported: No 00:15:32.268 00:15:32.268 Persistent Memory Region Support 00:15:32.268 ================================ 00:15:32.268 Supported: No 00:15:32.268 00:15:32.268 Admin Command Set Attributes 00:15:32.268 ============================ 00:15:32.268 Security Send/Receive: Not Supported 00:15:32.268 Format NVM: Not Supported 00:15:32.268 Firmware Activate/Download: Not Supported 00:15:32.268 Namespace Management: Not Supported 00:15:32.268 Device Self-Test: Not Supported 00:15:32.268 Directives: Not Supported 00:15:32.268 NVMe-MI: Not Supported 00:15:32.268 Virtualization Management: Not Supported 00:15:32.268 Doorbell Buffer Config: Not Supported 00:15:32.268 Get LBA Status Capability: Not Supported 00:15:32.268 Command & Feature Lockdown Capability: Not Supported 00:15:32.268 Abort Command Limit: 4 00:15:32.268 Async Event Request Limit: 4 00:15:32.268 Number of Firmware Slots: N/A 00:15:32.268 Firmware Slot 1 Read-Only: N/A 00:15:32.268 Firmware Activation Without Reset: N/A 00:15:32.268 Multiple Update Detection Support: N/A 00:15:32.268 Firmware Update Granularity: No Information Provided 00:15:32.268 Per-Namespace SMART Log: Yes 00:15:32.268 Asymmetric Namespace Access Log Page: Supported 00:15:32.268 ANA Transition Time : 10 sec 00:15:32.268 00:15:32.268 Asymmetric Namespace Access Capabilities 00:15:32.268 ANA Optimized State : Supported 00:15:32.268 ANA Non-Optimized State : Supported 00:15:32.268 ANA Inaccessible State : Supported 00:15:32.268 ANA Persistent Loss State : Supported 00:15:32.268 ANA Change State : Supported 00:15:32.268 ANAGRPID is not changed : No 00:15:32.268 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:15:32.268 00:15:32.268 ANA Group Identifier Maximum : 128 00:15:32.268 Number of ANA Group Identifiers : 128 00:15:32.268 Max Number of Allowed Namespaces : 1024 00:15:32.268 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:15:32.268 Command Effects Log Page: Supported 00:15:32.268 Get Log Page Extended Data: Supported 00:15:32.268 Telemetry Log Pages: Not Supported 00:15:32.268 Persistent Event Log Pages: Not Supported 00:15:32.268 Supported Log Pages Log Page: May Support 00:15:32.268 Commands Supported & Effects Log Page: Not Supported 00:15:32.268 Feature Identifiers & Effects Log Page:May Support 00:15:32.268 NVMe-MI Commands & Effects Log Page: May Support 00:15:32.268 Data Area 4 for Telemetry Log: Not Supported 00:15:32.268 Error Log Page Entries Supported: 128 00:15:32.268 Keep Alive: Supported 00:15:32.268 Keep Alive Granularity: 1000 ms 00:15:32.268 00:15:32.268 NVM Command Set Attributes 00:15:32.268 ========================== 00:15:32.268 Submission Queue Entry Size 00:15:32.268 Max: 64 00:15:32.268 Min: 64 00:15:32.268 Completion Queue Entry Size 00:15:32.268 Max: 16 00:15:32.268 Min: 16 00:15:32.268 Number of Namespaces: 1024 00:15:32.268 Compare Command: Not Supported 00:15:32.268 Write Uncorrectable Command: Not Supported 00:15:32.268 Dataset Management Command: Supported 00:15:32.268 Write Zeroes Command: Supported 00:15:32.268 Set Features Save Field: Not Supported 00:15:32.268 Reservations: Not Supported 00:15:32.268 Timestamp: Not Supported 00:15:32.268 Copy: Not Supported 00:15:32.268 Volatile Write Cache: Present 00:15:32.268 Atomic Write Unit (Normal): 1 00:15:32.268 Atomic Write Unit (PFail): 1 00:15:32.268 Atomic Compare & Write Unit: 1 00:15:32.268 Fused Compare & Write: Not Supported 00:15:32.268 Scatter-Gather List 00:15:32.268 SGL Command Set: Supported 00:15:32.268 SGL Keyed: Not Supported 00:15:32.268 SGL Bit Bucket Descriptor: Not Supported 00:15:32.268 SGL Metadata Pointer: Not Supported 00:15:32.268 Oversized SGL: Not Supported 00:15:32.268 SGL Metadata Address: Not Supported 00:15:32.268 SGL Offset: Supported 00:15:32.268 Transport SGL Data Block: Not Supported 00:15:32.268 Replay Protected Memory Block: Not Supported 00:15:32.268 00:15:32.268 Firmware Slot Information 00:15:32.268 ========================= 00:15:32.268 Active slot: 0 00:15:32.268 00:15:32.268 Asymmetric Namespace Access 00:15:32.268 =========================== 00:15:32.268 Change Count : 0 00:15:32.268 Number of ANA Group Descriptors : 1 00:15:32.268 ANA Group Descriptor : 0 00:15:32.268 ANA Group ID : 1 00:15:32.268 Number of NSID Values : 1 00:15:32.268 Change Count : 0 00:15:32.268 ANA State : 1 00:15:32.268 Namespace Identifier : 1 00:15:32.268 00:15:32.268 Commands Supported and Effects 00:15:32.268 ============================== 00:15:32.269 Admin Commands 00:15:32.269 -------------- 00:15:32.269 Get Log Page (02h): Supported 00:15:32.269 Identify (06h): Supported 00:15:32.269 Abort (08h): Supported 00:15:32.269 Set Features (09h): Supported 00:15:32.269 Get Features (0Ah): Supported 00:15:32.269 Asynchronous Event Request (0Ch): Supported 00:15:32.269 Keep Alive (18h): Supported 00:15:32.269 I/O Commands 00:15:32.269 ------------ 00:15:32.269 Flush (00h): Supported 00:15:32.269 Write (01h): Supported LBA-Change 00:15:32.269 Read (02h): Supported 00:15:32.269 Write Zeroes (08h): Supported LBA-Change 00:15:32.269 Dataset Management (09h): Supported 00:15:32.269 00:15:32.269 Error Log 00:15:32.269 ========= 00:15:32.269 Entry: 0 00:15:32.269 Error Count: 0x3 00:15:32.269 Submission Queue Id: 0x0 00:15:32.269 Command Id: 0x5 00:15:32.269 Phase Bit: 0 00:15:32.269 Status Code: 0x2 00:15:32.269 Status Code Type: 0x0 00:15:32.269 Do Not Retry: 1 00:15:32.269 Error 
Location: 0x28 00:15:32.269 LBA: 0x0 00:15:32.269 Namespace: 0x0 00:15:32.269 Vendor Log Page: 0x0 00:15:32.269 ----------- 00:15:32.269 Entry: 1 00:15:32.269 Error Count: 0x2 00:15:32.269 Submission Queue Id: 0x0 00:15:32.269 Command Id: 0x5 00:15:32.269 Phase Bit: 0 00:15:32.269 Status Code: 0x2 00:15:32.269 Status Code Type: 0x0 00:15:32.269 Do Not Retry: 1 00:15:32.269 Error Location: 0x28 00:15:32.269 LBA: 0x0 00:15:32.269 Namespace: 0x0 00:15:32.269 Vendor Log Page: 0x0 00:15:32.269 ----------- 00:15:32.269 Entry: 2 00:15:32.269 Error Count: 0x1 00:15:32.269 Submission Queue Id: 0x0 00:15:32.269 Command Id: 0x4 00:15:32.269 Phase Bit: 0 00:15:32.269 Status Code: 0x2 00:15:32.269 Status Code Type: 0x0 00:15:32.269 Do Not Retry: 1 00:15:32.269 Error Location: 0x28 00:15:32.269 LBA: 0x0 00:15:32.269 Namespace: 0x0 00:15:32.269 Vendor Log Page: 0x0 00:15:32.269 00:15:32.269 Number of Queues 00:15:32.269 ================ 00:15:32.269 Number of I/O Submission Queues: 128 00:15:32.269 Number of I/O Completion Queues: 128 00:15:32.269 00:15:32.269 ZNS Specific Controller Data 00:15:32.269 ============================ 00:15:32.269 Zone Append Size Limit: 0 00:15:32.269 00:15:32.269 00:15:32.269 Active Namespaces 00:15:32.269 ================= 00:15:32.269 get_feature(0x05) failed 00:15:32.269 Namespace ID:1 00:15:32.269 Command Set Identifier: NVM (00h) 00:15:32.269 Deallocate: Supported 00:15:32.269 Deallocated/Unwritten Error: Not Supported 00:15:32.269 Deallocated Read Value: Unknown 00:15:32.269 Deallocate in Write Zeroes: Not Supported 00:15:32.269 Deallocated Guard Field: 0xFFFF 00:15:32.269 Flush: Supported 00:15:32.269 Reservation: Not Supported 00:15:32.269 Namespace Sharing Capabilities: Multiple Controllers 00:15:32.269 Size (in LBAs): 1310720 (5GiB) 00:15:32.269 Capacity (in LBAs): 1310720 (5GiB) 00:15:32.269 Utilization (in LBAs): 1310720 (5GiB) 00:15:32.269 UUID: c85b1ac6-165a-40f5-a913-ff9a1e7c225f 00:15:32.269 Thin Provisioning: Not Supported 00:15:32.269 Per-NS Atomic Units: Yes 00:15:32.269 Atomic Boundary Size (Normal): 0 00:15:32.269 Atomic Boundary Size (PFail): 0 00:15:32.269 Atomic Boundary Offset: 0 00:15:32.269 NGUID/EUI64 Never Reused: No 00:15:32.269 ANA group ID: 1 00:15:32.269 Namespace Write Protected: No 00:15:32.269 Number of LBA Formats: 1 00:15:32.269 Current LBA Format: LBA Format #00 00:15:32.269 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:15:32.269 00:15:32.269 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:15:32.269 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:32.269 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:15:32.269 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:32.269 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:15:32.269 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:32.269 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:32.269 rmmod nvme_tcp 00:15:32.269 rmmod nvme_fabrics 00:15:32.269 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:32.269 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:15:32.269 12:50:40 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:15:32.269 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:15:32.269 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:32.269 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:32.269 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:32.269 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:15:32.269 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:15:32.269 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:32.269 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:15:32.269 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:32.269 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:32.269 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:32.269 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:32.269 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:32.270 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:32.538 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:32.538 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:32.538 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:32.538 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:32.538 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:32.538 12:50:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:32.538 12:50:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:32.538 12:50:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:32.538 12:50:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:32.538 12:50:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:32.538 12:50:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.538 12:50:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:32.538 12:50:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.538 12:50:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:15:32.538 12:50:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:15:32.538 12:50:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:15:32.538 12:50:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:15:32.538 12:50:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:32.538 12:50:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:15:32.538 12:50:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:15:32.538 12:50:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:32.538 12:50:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:15:32.538 12:50:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:15:32.538 12:50:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:33.475 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:33.475 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:33.475 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:33.475 00:15:33.475 real 0m3.334s 00:15:33.475 user 0m1.187s 00:15:33.475 sys 0m1.510s 00:15:33.475 12:50:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:33.475 12:50:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.475 ************************************ 00:15:33.475 END TEST nvmf_identify_kernel_target 00:15:33.475 ************************************ 00:15:33.475 12:50:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:15:33.475 12:50:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:33.475 12:50:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:33.475 12:50:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:33.475 ************************************ 00:15:33.475 START TEST nvmf_auth_host 00:15:33.475 ************************************ 00:15:33.475 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:15:33.735 * Looking for test storage... 
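Before the next test begins, the clean_kernel_target sequence traced above tears down the kernel nvmet target that the identify test configured. The condensed sketch below is reconstructed from the trace; the file receiving the traced `echo 0` (taken here to be the namespace enable attribute) is an assumption, since the redirect target is not visible in xtrace output.

    nqn=nqn.2016-06.io.spdk:testnqn
    cfg=/sys/kernel/config/nvmet
    # assumption: the traced 'echo 0' disables the namespace before removal
    echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"
    rm -f "$cfg/ports/1/subsystems/$nqn"          # unlink the subsystem from port 1
    rmdir "$cfg/subsystems/$nqn/namespaces/1"     # drop the namespace
    rmdir "$cfg/ports/1"                          # drop the port
    rmdir "$cfg/subsystems/$nqn"                  # drop the subsystem itself
    modprobe -r nvmet_tcp nvmet                   # unload the kernel target modules
    /home/vagrant/spdk_repo/spdk/scripts/setup.sh # rebind NVMe devices for SPDK use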
00:15:33.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:33.735 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:33.735 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:33.735 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:15:33.735 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:33.735 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:33.735 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:33.735 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:33.735 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:33.735 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:33.735 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:33.735 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:33.735 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:33.735 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:33.735 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:33.735 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:33.735 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:15:33.735 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:15:33.735 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:33.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.736 --rc genhtml_branch_coverage=1 00:15:33.736 --rc genhtml_function_coverage=1 00:15:33.736 --rc genhtml_legend=1 00:15:33.736 --rc geninfo_all_blocks=1 00:15:33.736 --rc geninfo_unexecuted_blocks=1 00:15:33.736 00:15:33.736 ' 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:33.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.736 --rc genhtml_branch_coverage=1 00:15:33.736 --rc genhtml_function_coverage=1 00:15:33.736 --rc genhtml_legend=1 00:15:33.736 --rc geninfo_all_blocks=1 00:15:33.736 --rc geninfo_unexecuted_blocks=1 00:15:33.736 00:15:33.736 ' 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:33.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.736 --rc genhtml_branch_coverage=1 00:15:33.736 --rc genhtml_function_coverage=1 00:15:33.736 --rc genhtml_legend=1 00:15:33.736 --rc geninfo_all_blocks=1 00:15:33.736 --rc geninfo_unexecuted_blocks=1 00:15:33.736 00:15:33.736 ' 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:33.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.736 --rc genhtml_branch_coverage=1 00:15:33.736 --rc genhtml_function_coverage=1 00:15:33.736 --rc genhtml_legend=1 00:15:33.736 --rc geninfo_all_blocks=1 00:15:33.736 --rc geninfo_unexecuted_blocks=1 00:15:33.736 00:15:33.736 ' 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:33.736 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:15:33.736 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:33.737 Cannot find device "nvmf_init_br" 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:33.737 Cannot find device "nvmf_init_br2" 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:33.737 Cannot find device "nvmf_tgt_br" 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:15:33.737 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:33.996 Cannot find device "nvmf_tgt_br2" 00:15:33.996 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:15:33.996 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:33.997 Cannot find device "nvmf_init_br" 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:33.997 Cannot find device "nvmf_init_br2" 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:33.997 Cannot find device "nvmf_tgt_br" 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:33.997 Cannot find device "nvmf_tgt_br2" 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:33.997 Cannot find device "nvmf_br" 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:33.997 Cannot find device "nvmf_init_if" 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:33.997 Cannot find device "nvmf_init_if2" 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:33.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:33.997 12:50:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:33.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:33.997 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:34.256 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
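The nvmf_veth_init calls traced above build a small virtual topology: the initiator-side veth interfaces stay in the root namespace (10.0.0.1 and 10.0.0.2), their target-side peers sit inside the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), and all bridge-side peers are enslaved to nvmf_br. A minimal standalone sketch of the same idea, reduced to one veth pair per side, looks like this:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge both sides together
    ip link set nvmf_tgt_br  master nvmf_br

Bridging both sides on nvmf_br is what lets the ping checks that follow reach 10.0.0.3/10.0.0.4 from the root namespace and 10.0.0.1/10.0.0.2 from inside the namespace.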
00:15:34.256 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:34.256 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:34.256 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:34.256 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:34.256 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:34.256 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:34.256 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:34.256 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:34.256 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:34.256 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:15:34.256 00:15:34.256 --- 10.0.0.3 ping statistics --- 00:15:34.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.256 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:15:34.256 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:34.256 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:34.256 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:15:34.256 00:15:34.256 --- 10.0.0.4 ping statistics --- 00:15:34.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.256 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:34.256 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:34.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:34.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:34.256 00:15:34.256 --- 10.0.0.1 ping statistics --- 00:15:34.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.256 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:34.256 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:34.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:34.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:15:34.256 00:15:34.256 --- 10.0.0.2 ping statistics --- 00:15:34.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.257 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:15:34.257 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:34.257 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:15:34.257 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:34.257 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:34.257 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:34.257 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:34.257 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:34.257 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:34.257 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:34.257 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:15:34.257 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:34.257 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:34.257 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:34.257 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=77684 00:15:34.257 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 77684 00:15:34.257 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:15:34.257 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 77684 ']' 00:15:34.257 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.257 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:34.257 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
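With connectivity verified by the pings above, nvmfappstart launches the SPDK target inside the namespace and waits for its RPC socket. The launch command below is copied from the trace; the polling loop standing in for waitforlisten is a simplified, hypothetical sketch, since that function's body is not part of this log.

    modprobe nvme-tcp   # host-side transport for the initiator
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
    nvmfpid=$!
    # hypothetical stand-in for waitforlisten: poll until the RPC socket answers
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
        sleep 0.1
    done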
00:15:34.257 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:34.257 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:34.516 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.516 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:15:34.516 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:34.516 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:34.516 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:34.516 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.516 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:15:34.516 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:15:34.516 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:34.516 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:34.516 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:34.516 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:15:34.516 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:15:34.516 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:34.516 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=303527c3fb0c09bbd1332f53c70c734a 00:15:34.516 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:34.516 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.pLE 00:15:34.516 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 303527c3fb0c09bbd1332f53c70c734a 0 00:15:34.516 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 303527c3fb0c09bbd1332f53c70c734a 0 00:15:34.516 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:34.516 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:34.516 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=303527c3fb0c09bbd1332f53c70c734a 00:15:34.516 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:15:34.516 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.pLE 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.pLE 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.pLE 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:34.776 12:50:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7e8b82a1be0372e68611d02acad7218a6f8d8b159eb83f18c3a6072b41eba9e4 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.8Gv 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7e8b82a1be0372e68611d02acad7218a6f8d8b159eb83f18c3a6072b41eba9e4 3 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7e8b82a1be0372e68611d02acad7218a6f8d8b159eb83f18c3a6072b41eba9e4 3 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7e8b82a1be0372e68611d02acad7218a6f8d8b159eb83f18c3a6072b41eba9e4 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.8Gv 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.8Gv 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.8Gv 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e63ae423f3c2261c0d6bf46e3f5ae7a819be9803087fcae4 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.pmj 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e63ae423f3c2261c0d6bf46e3f5ae7a819be9803087fcae4 0 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e63ae423f3c2261c0d6bf46e3f5ae7a819be9803087fcae4 0 
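The gen_dhchap_key calls above draw random key material with xxd and hand it to an inline python snippet whose body is not captured by the xtrace. The sketch below reproduces the visible shell steps and fills in the wrapping under an assumption: that the secret is encoded in the NVMe DH-HMAC-CHAP representation DHHC-1:<digest>:<base64 of the key bytes plus an appended CRC-32>:, where the digest field is 0 for null and 1/2/3 for sha256/sha384/sha512, matching the numeric arguments seen in the trace. The byte order of the appended CRC in particular is an assumption, not something visible in this log.

    len=32                                            # requested key length in hex characters
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # random hex key material, as in the trace
    digest=0                                          # 0=null, 1=sha256, 2=sha384, 3=sha512
    file=$(mktemp -t spdk.key-null.XXX)
    python3 - "$key" "$digest" > "$file" <<'EOF'
    import base64, sys, zlib
    key = bytes.fromhex(sys.argv[1])
    crc = zlib.crc32(key).to_bytes(4, "little")       # assumed little-endian CRC-32 suffix
    print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
    EOF
    chmod 0600 "$file"

After the chmod 0600, the temporary file path is what the test later hands to rpc_cmd keyring_file_add_key, as the trace continues below.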
00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e63ae423f3c2261c0d6bf46e3f5ae7a819be9803087fcae4 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.pmj 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.pmj 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.pmj 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=585201316d935e3240fd340ffbd065f8394f973c728eeef9 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.C1x 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 585201316d935e3240fd340ffbd065f8394f973c728eeef9 2 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 585201316d935e3240fd340ffbd065f8394f973c728eeef9 2 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=585201316d935e3240fd340ffbd065f8394f973c728eeef9 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.C1x 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.C1x 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.C1x 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:34.776 12:50:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9b9dc26fac1b67da29f653f92db0fbd2 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.bp0 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9b9dc26fac1b67da29f653f92db0fbd2 1 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9b9dc26fac1b67da29f653f92db0fbd2 1 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9b9dc26fac1b67da29f653f92db0fbd2 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:15:34.776 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.bp0 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.bp0 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.bp0 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5d1705cdf3b291037efa6736e3baa274 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.rQ2 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5d1705cdf3b291037efa6736e3baa274 1 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5d1705cdf3b291037efa6736e3baa274 1 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=5d1705cdf3b291037efa6736e3baa274 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.rQ2 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.rQ2 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.rQ2 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8a74c11311525140a631688fc0671ee6008033a87409e769 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Bls 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8a74c11311525140a631688fc0671ee6008033a87409e769 2 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8a74c11311525140a631688fc0671ee6008033a87409e769 2 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8a74c11311525140a631688fc0671ee6008033a87409e769 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Bls 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Bls 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Bls 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:15:35.036 12:50:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ef8e1a98ab4e58347c624e009971ae3c 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Dyh 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ef8e1a98ab4e58347c624e009971ae3c 0 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ef8e1a98ab4e58347c624e009971ae3c 0 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ef8e1a98ab4e58347c624e009971ae3c 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Dyh 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Dyh 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Dyh 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b468cf3f39363dbdf4828b97d24d22f9c5fe87313fd180292323a2c8a83052a1 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Pq6 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b468cf3f39363dbdf4828b97d24d22f9c5fe87313fd180292323a2c8a83052a1 3 00:15:35.036 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b468cf3f39363dbdf4828b97d24d22f9c5fe87313fd180292323a2c8a83052a1 3 00:15:35.295 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:35.295 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:35.295 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b468cf3f39363dbdf4828b97d24d22f9c5fe87313fd180292323a2c8a83052a1 00:15:35.295 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:15:35.295 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:15:35.295 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Pq6 00:15:35.295 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Pq6 00:15:35.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.295 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Pq6 00:15:35.295 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:15:35.295 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 77684 00:15:35.295 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 77684 ']' 00:15:35.295 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.295 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:35.295 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.295 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:35.295 12:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pLE 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.8Gv ]] 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8Gv 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.pmj 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.C1x ]] 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.C1x 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.bp0 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.rQ2 ]] 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rQ2 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Bls 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Dyh ]] 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Dyh 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Pq6 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.554 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:15:35.555 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:15:35.555 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:15:35.555 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:35.555 12:50:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:35.555 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:35.555 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:35.555 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:35.555 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:35.555 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:35.555 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:35.555 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:35.555 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:35.555 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:15:35.555 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:15:35.555 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:15:35.555 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:35.555 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:15:35.555 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:15:35.555 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:15:35.555 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:15:35.555 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:15:35.555 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:15:35.555 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:36.122 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:36.123 Waiting for block devices as requested 00:15:36.123 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:36.123 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:15:36.690 No valid GPT data, bailing 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:15:36.690 No valid GPT data, bailing 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:15:36.690 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:15:36.949 No valid GPT data, bailing 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:15:36.949 No valid GPT data, bailing 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid=85bcfa6f-4742-42db-8cde-87c16c4a32fc -a 10.0.0.1 -t tcp -s 4420 00:15:36.949 00:15:36.949 Discovery Log Number of Records 2, Generation counter 2 00:15:36.949 =====Discovery Log Entry 0====== 00:15:36.949 trtype: tcp 00:15:36.949 adrfam: ipv4 00:15:36.949 subtype: current discovery subsystem 00:15:36.949 treq: not specified, sq flow control disable supported 00:15:36.949 portid: 1 00:15:36.949 trsvcid: 4420 00:15:36.949 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:36.949 traddr: 10.0.0.1 00:15:36.949 eflags: none 00:15:36.949 sectype: none 00:15:36.949 =====Discovery Log Entry 1====== 00:15:36.949 trtype: tcp 00:15:36.949 adrfam: ipv4 00:15:36.949 subtype: nvme subsystem 00:15:36.949 treq: not specified, sq flow control disable supported 00:15:36.949 portid: 1 00:15:36.949 trsvcid: 4420 00:15:36.949 subnqn: nqn.2024-02.io.spdk:cnode0 00:15:36.949 traddr: 10.0.0.1 00:15:36.949 eflags: none 00:15:36.949 sectype: none 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:36.949 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: ]] 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.209 nvme0n1 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAzNTI3YzNmYjBjMDliYmQxMzMyZjUzYzcwYzczNGFEGgQ1: 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAzNTI3YzNmYjBjMDliYmQxMzMyZjUzYzcwYzczNGFEGgQ1: 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: ]] 00:15:37.209 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: 00:15:37.210 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:15:37.210 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:37.210 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:37.210 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:37.210 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:37.210 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:37.210 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:37.210 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.210 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.210 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.210 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:37.210 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:37.210 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:37.210 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:37.210 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:37.210 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:37.210 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:37.210 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:37.210 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:37.210 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:37.210 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:37.210 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.210 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.210 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.469 nvme0n1 00:15:37.469 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.469 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:37.469 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:37.469 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.469 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.469 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.469 
12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: ]] 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:37.469 12:50:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.469 nvme0n1 00:15:37.469 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.728 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:37.728 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:37.728 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.728 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.728 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.728 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.728 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:37.728 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.728 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.728 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.728 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:37.728 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:15:37.728 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:37.728 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:37.728 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:15:37.729 12:50:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: ]] 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.729 nvme0n1 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGE3NGMxMTMxMTUyNTE0MGE2MzE2ODhmYzA2NzFlZTYwMDgwMzNhODc0MDllNzY5wXOezg==: 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGE3NGMxMTMxMTUyNTE0MGE2MzE2ODhmYzA2NzFlZTYwMDgwMzNhODc0MDllNzY5wXOezg==: 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: ]] 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.729 12:50:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.729 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.988 nvme0n1 00:15:37.988 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.988 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:37.988 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.988 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.988 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:37.988 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.988 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.988 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:37.988 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.988 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.988 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.988 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:37.988 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:15:37.988 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:37.988 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:37.988 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:37.988 
12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:37.988 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjQ2OGNmM2YzOTM2M2RiZGY0ODI4Yjk3ZDI0ZDIyZjljNWZlODczMTNmZDE4MDI5MjMyM2EyYzhhODMwNTJhMfxAxHc=: 00:15:37.988 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:37.988 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:37.988 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:37.988 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjQ2OGNmM2YzOTM2M2RiZGY0ODI4Yjk3ZDI0ZDIyZjljNWZlODczMTNmZDE4MDI5MjMyM2EyYzhhODMwNTJhMfxAxHc=: 00:15:37.988 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:37.988 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:15:37.988 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:37.989 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:37.989 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:37.989 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:37.989 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:37.989 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:37.989 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.989 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.989 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.989 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:37.989 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:37.989 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:37.989 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:37.989 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:37.989 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:37.989 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:37.989 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:37.989 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:37.989 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:37.989 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:37.989 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:37.989 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.989 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
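The configure_kernel_target and nvmet_auth_init steps traced above drive the Linux kernel NVMe-oF target entirely through configfs; set -x hides where each echo is redirected, so the attribute file names in the sketch below are assumptions based on the standard /sys/kernel/config/nvmet layout rather than values taken from the log. Only the echoed values, the NQNs, and the ln -s targets come from the trace itself.

    nqn=nqn.2024-02.io.spdk:cnode0
    hostnqn=nqn.2024-02.io.spdk:host0
    sub=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1

    modprobe nvmet                                # as in the trace; nvmet-tcp must also be available for the tcp port
    mkdir -p "$sub/namespaces/1" "$port" "/sys/kernel/config/nvmet/hosts/$hostnqn"

    echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"   # block device selected by the GPT scan above
    echo 1            > "$sub/namespaces/1/enable"
    echo 0            > "$sub/attr_allow_any_host"        # only NQNs linked under allowed_hosts may connect

    echo 10.0.0.1 > "$port/addr_traddr"                   # values echoed in the trace
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"

    ln -s "$sub" "$port/subsystems/"                                       # expose the subsystem on the port
    ln -s "/sys/kernel/config/nvmet/hosts/$hostnqn" "$sub/allowed_hosts/"  # whitelist the host NQN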
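On the initiator side, each digest/DH-group/key combination exercised by the loop reduces to the same short RPC sequence. A minimal sketch of one round follows, assuming rpc_cmd is a thin wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock and reusing the key files written to /tmp earlier in the trace; the RPC names and flags are the ones that appear verbatim in the log.

    rpc() { scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }   # stand-in for rpc_cmd (assumed socket path)

    # Register the host key and the controller (bidirectional) key in the SPDK keyring.
    rpc keyring_file_add_key key2  /tmp/spdk.key-sha256.bp0
    rpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rQ2

    # Limit the initiator to one digest/DH-group pair for this round.
    rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # Connect to the kernel target and authenticate with DH-HMAC-CHAP.
    rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Confirm the controller came up (its namespace shows up as nvme0n1), then detach before the next round.
    [[ "$(rpc bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
    rpc bdev_nvme_detach_controller nvme0

Before each attach, nvmet_auth_set_key echoes the matching DHHC-1 secrets, the 'hmac(sha256)' transform name, and the DH group into the kernel target's host entry, so both sides negotiate with the same parameters being tested.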
00:15:37.989 nvme0n1 00:15:37.989 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.248 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:38.248 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:38.248 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.248 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.248 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.248 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.248 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:38.248 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.248 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.248 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.248 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:38.248 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:38.248 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:15:38.248 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:38.248 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:38.248 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:38.248 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:38.248 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAzNTI3YzNmYjBjMDliYmQxMzMyZjUzYzcwYzczNGFEGgQ1: 00:15:38.248 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: 00:15:38.248 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:38.248 12:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:38.507 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAzNTI3YzNmYjBjMDliYmQxMzMyZjUzYzcwYzczNGFEGgQ1: 00:15:38.507 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: ]] 00:15:38.507 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: 00:15:38.507 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:15:38.507 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:38.507 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:38.507 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:38.507 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:38.507 12:50:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:38.507 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:38.507 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.507 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.507 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.507 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:38.507 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:38.507 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:38.507 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:38.507 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:38.507 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:38.507 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:38.507 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:38.507 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:38.507 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:38.507 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:38.507 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.507 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.507 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.507 nvme0n1 00:15:38.507 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:38.766 12:50:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: ]] 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:38.766 12:50:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.766 nvme0n1 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:38.766 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:39.025 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:39.025 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:39.025 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:15:39.025 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:15:39.025 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:39.025 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:39.025 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: ]] 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.026 nvme0n1 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGE3NGMxMTMxMTUyNTE0MGE2MzE2ODhmYzA2NzFlZTYwMDgwMzNhODc0MDllNzY5wXOezg==: 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGE3NGMxMTMxMTUyNTE0MGE2MzE2ODhmYzA2NzFlZTYwMDgwMzNhODc0MDllNzY5wXOezg==: 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: ]] 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.026 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.285 nvme0n1 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjQ2OGNmM2YzOTM2M2RiZGY0ODI4Yjk3ZDI0ZDIyZjljNWZlODczMTNmZDE4MDI5MjMyM2EyYzhhODMwNTJhMfxAxHc=: 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YjQ2OGNmM2YzOTM2M2RiZGY0ODI4Yjk3ZDI0ZDIyZjljNWZlODczMTNmZDE4MDI5MjMyM2EyYzhhODMwNTJhMfxAxHc=: 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.285 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.545 nvme0n1 00:15:39.545 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.545 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:39.545 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:39.545 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.545 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.545 12:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.545 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.545 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:39.545 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.545 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.545 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.545 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:39.545 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:39.545 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:15:39.545 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:39.545 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:39.545 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:39.545 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:39.545 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAzNTI3YzNmYjBjMDliYmQxMzMyZjUzYzcwYzczNGFEGgQ1: 00:15:39.545 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: 00:15:39.545 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:39.545 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:40.114 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAzNTI3YzNmYjBjMDliYmQxMzMyZjUzYzcwYzczNGFEGgQ1: 00:15:40.114 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: ]] 00:15:40.114 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: 00:15:40.114 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:15:40.114 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:40.114 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:40.114 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:40.114 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:40.114 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:40.114 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:40.114 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.114 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.114 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.114 12:50:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:40.114 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:40.114 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:40.114 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:40.114 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:40.114 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:40.114 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:40.114 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:40.114 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:40.114 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:40.114 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:40.114 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.114 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.114 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.374 nvme0n1 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: ]] 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.374 12:50:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.374 12:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.633 nvme0n1 00:15:40.633 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.633 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:40.633 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.633 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.633 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:40.633 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.633 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.633 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:40.633 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.633 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.633 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.633 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:40.633 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:15:40.633 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:40.633 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:40.633 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: ]] 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.634 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.893 nvme0n1 00:15:40.893 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.893 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:40.893 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.893 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:40.893 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.893 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.893 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.893 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:40.893 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.893 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.893 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.893 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:40.893 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:15:40.893 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:40.893 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:40.893 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:40.893 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:40.893 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGE3NGMxMTMxMTUyNTE0MGE2MzE2ODhmYzA2NzFlZTYwMDgwMzNhODc0MDllNzY5wXOezg==: 00:15:40.894 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: 00:15:40.894 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:40.894 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:40.894 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGE3NGMxMTMxMTUyNTE0MGE2MzE2ODhmYzA2NzFlZTYwMDgwMzNhODc0MDllNzY5wXOezg==: 00:15:40.894 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: ]] 00:15:40.894 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: 00:15:40.894 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:15:40.894 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:40.894 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:40.894 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:40.894 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:40.894 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:40.894 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:40.894 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.894 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.894 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.894 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:40.894 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:40.894 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:40.894 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:40.894 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:40.894 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:40.894 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:40.894 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:40.894 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:40.894 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:40.894 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:40.894 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:40.894 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.894 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.153 nvme0n1 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjQ2OGNmM2YzOTM2M2RiZGY0ODI4Yjk3ZDI0ZDIyZjljNWZlODczMTNmZDE4MDI5MjMyM2EyYzhhODMwNTJhMfxAxHc=: 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjQ2OGNmM2YzOTM2M2RiZGY0ODI4Yjk3ZDI0ZDIyZjljNWZlODczMTNmZDE4MDI5MjMyM2EyYzhhODMwNTJhMfxAxHc=: 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:41.153 12:50:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.153 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.412 nvme0n1 00:15:41.413 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.413 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:41.413 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:41.413 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.413 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.413 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.413 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.413 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:41.413 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.413 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.413 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.413 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:41.413 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:41.413 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:15:41.413 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:41.413 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:41.413 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:41.413 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:41.413 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAzNTI3YzNmYjBjMDliYmQxMzMyZjUzYzcwYzczNGFEGgQ1: 00:15:41.413 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: 00:15:41.413 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:41.413 12:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:42.789 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAzNTI3YzNmYjBjMDliYmQxMzMyZjUzYzcwYzczNGFEGgQ1: 00:15:42.789 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: ]] 00:15:42.789 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: 00:15:42.789 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:15:42.789 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:42.789 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:42.789 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:42.789 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:42.789 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:42.789 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:42.789 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.789 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.789 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.789 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:42.790 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:42.790 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:42.790 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:42.790 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:42.790 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:42.790 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:42.790 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:42.790 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:42.790 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:42.790 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:42.790 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.790 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.790 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.356 nvme0n1 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: ]] 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.356 12:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.614 nvme0n1 00:15:43.614 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.614 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:43.614 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.614 12:50:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.614 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:43.614 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.614 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.614 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:43.614 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.614 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.614 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.614 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:43.614 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: ]] 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.615 12:50:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.615 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.873 nvme0n1 00:15:43.873 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.873 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:43.873 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.873 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:43.873 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.132 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.132 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.132 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OGE3NGMxMTMxMTUyNTE0MGE2MzE2ODhmYzA2NzFlZTYwMDgwMzNhODc0MDllNzY5wXOezg==: 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGE3NGMxMTMxMTUyNTE0MGE2MzE2ODhmYzA2NzFlZTYwMDgwMzNhODc0MDllNzY5wXOezg==: 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: ]] 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:44.133 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.133 
12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.392 nvme0n1 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjQ2OGNmM2YzOTM2M2RiZGY0ODI4Yjk3ZDI0ZDIyZjljNWZlODczMTNmZDE4MDI5MjMyM2EyYzhhODMwNTJhMfxAxHc=: 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjQ2OGNmM2YzOTM2M2RiZGY0ODI4Yjk3ZDI0ZDIyZjljNWZlODczMTNmZDE4MDI5MjMyM2EyYzhhODMwNTJhMfxAxHc=: 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.392 12:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.392 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.392 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:44.392 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:44.392 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:44.392 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:44.392 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:44.392 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:44.392 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:44.393 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:44.393 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:44.393 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:44.393 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:44.393 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:44.393 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.393 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.652 nvme0n1 00:15:44.652 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.652 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:44.652 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:44.912 12:50:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAzNTI3YzNmYjBjMDliYmQxMzMyZjUzYzcwYzczNGFEGgQ1: 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAzNTI3YzNmYjBjMDliYmQxMzMyZjUzYzcwYzczNGFEGgQ1: 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: ]] 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.912 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:45.479 nvme0n1 00:15:45.479 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.479 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:45.479 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:45.479 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.479 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:45.479 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.479 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.479 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:45.480 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.480 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:45.480 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.480 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:45.480 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:15:45.480 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:45.480 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:45.480 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:45.480 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:45.480 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:15:45.480 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:15:45.480 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:45.480 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:45.480 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:15:45.480 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: ]] 00:15:45.480 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:15:45.480 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:15:45.480 12:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:45.480 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:45.480 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:45.480 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:45.480 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:45.480 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:45.480 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.480 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:45.480 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.480 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:45.480 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:45.480 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:45.480 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:45.480 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:45.480 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:45.480 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:45.480 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:45.480 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:45.480 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:45.480 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:45.480 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.480 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.480 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.048 nvme0n1 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: ]] 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:46.048 
12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.048 12:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.628 nvme0n1 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGE3NGMxMTMxMTUyNTE0MGE2MzE2ODhmYzA2NzFlZTYwMDgwMzNhODc0MDllNzY5wXOezg==: 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGE3NGMxMTMxMTUyNTE0MGE2MzE2ODhmYzA2NzFlZTYwMDgwMzNhODc0MDllNzY5wXOezg==: 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: ]] 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.628 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.196 nvme0n1 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.196 12:50:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjQ2OGNmM2YzOTM2M2RiZGY0ODI4Yjk3ZDI0ZDIyZjljNWZlODczMTNmZDE4MDI5MjMyM2EyYzhhODMwNTJhMfxAxHc=: 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjQ2OGNmM2YzOTM2M2RiZGY0ODI4Yjk3ZDI0ZDIyZjljNWZlODczMTNmZDE4MDI5MjMyM2EyYzhhODMwNTJhMfxAxHc=: 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:47.196 12:50:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.196 12:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.765 nvme0n1 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAzNTI3YzNmYjBjMDliYmQxMzMyZjUzYzcwYzczNGFEGgQ1: 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAzNTI3YzNmYjBjMDliYmQxMzMyZjUzYzcwYzczNGFEGgQ1: 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: ]] 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.765 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:15:48.024 nvme0n1 00:15:48.024 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.024 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:48.024 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:48.024 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.024 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.024 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.024 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.024 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:48.024 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.024 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.024 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.024 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:48.024 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:15:48.024 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:48.024 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:48.024 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:48.024 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:48.024 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:15:48.024 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:15:48.025 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:48.025 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:48.025 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:15:48.025 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: ]] 00:15:48.025 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:15:48.025 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:15:48.025 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:48.025 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:48.025 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:48.025 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:48.025 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:15:48.025 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:48.025 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.025 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.025 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.025 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:48.025 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:48.025 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:48.025 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:48.025 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:48.025 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:48.025 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:48.025 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:48.025 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:48.025 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:48.025 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:48.025 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.025 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.025 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.284 nvme0n1 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:15:48.284 
12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: ]] 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.284 nvme0n1 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.284 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.544 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.544 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:48.544 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:15:48.544 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:48.544 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:48.544 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:48.544 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:48.544 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGE3NGMxMTMxMTUyNTE0MGE2MzE2ODhmYzA2NzFlZTYwMDgwMzNhODc0MDllNzY5wXOezg==: 00:15:48.544 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: 00:15:48.544 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:48.544 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:48.544 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGE3NGMxMTMxMTUyNTE0MGE2MzE2ODhmYzA2NzFlZTYwMDgwMzNhODc0MDllNzY5wXOezg==: 00:15:48.544 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: ]] 00:15:48.544 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: 00:15:48.544 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:15:48.544 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:48.544 
12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:48.544 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:48.544 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:48.544 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:48.544 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:48.545 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.545 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.545 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.545 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:48.545 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:48.545 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:48.545 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:48.545 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:48.545 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:48.545 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:48.545 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:48.545 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:48.545 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:48.545 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:48.545 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:48.545 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.545 12:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.545 nvme0n1 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjQ2OGNmM2YzOTM2M2RiZGY0ODI4Yjk3ZDI0ZDIyZjljNWZlODczMTNmZDE4MDI5MjMyM2EyYzhhODMwNTJhMfxAxHc=: 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjQ2OGNmM2YzOTM2M2RiZGY0ODI4Yjk3ZDI0ZDIyZjljNWZlODczMTNmZDE4MDI5MjMyM2EyYzhhODMwNTJhMfxAxHc=: 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.545 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.804 nvme0n1 00:15:48.804 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.804 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:48.804 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:48.804 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.804 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.804 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.804 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.804 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:48.804 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.804 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.804 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.804 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAzNTI3YzNmYjBjMDliYmQxMzMyZjUzYzcwYzczNGFEGgQ1: 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAzNTI3YzNmYjBjMDliYmQxMzMyZjUzYzcwYzczNGFEGgQ1: 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: ]] 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.805 nvme0n1 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.805 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.064 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.064 
12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.064 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:49.064 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.064 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.064 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.064 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:49.064 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:15:49.064 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:49.064 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:49.064 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:49.064 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:49.064 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:15:49.064 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:15:49.064 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:49.064 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:49.064 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:15:49.064 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: ]] 00:15:49.064 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:15:49.064 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:49.065 12:50:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.065 nvme0n1 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.065 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.324 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.324 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:49.324 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:15:49.324 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:49.324 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:49.324 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:49.324 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:49.324 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:15:49.324 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:15:49.325 12:50:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: ]] 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.325 nvme0n1 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGE3NGMxMTMxMTUyNTE0MGE2MzE2ODhmYzA2NzFlZTYwMDgwMzNhODc0MDllNzY5wXOezg==: 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGE3NGMxMTMxMTUyNTE0MGE2MzE2ODhmYzA2NzFlZTYwMDgwMzNhODc0MDllNzY5wXOezg==: 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: ]] 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.325 12:50:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.325 12:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.585 nvme0n1 00:15:49.585 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.585 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:49.585 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.585 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.585 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:49.585 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.585 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.585 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:49.585 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.585 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.585 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.585 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:49.585 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:15:49.585 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:49.586 
12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjQ2OGNmM2YzOTM2M2RiZGY0ODI4Yjk3ZDI0ZDIyZjljNWZlODczMTNmZDE4MDI5MjMyM2EyYzhhODMwNTJhMfxAxHc=: 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjQ2OGNmM2YzOTM2M2RiZGY0ODI4Yjk3ZDI0ZDIyZjljNWZlODczMTNmZDE4MDI5MjMyM2EyYzhhODMwNTJhMfxAxHc=: 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.586 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
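[Editor's note] The trace above repeats the same host-side sequence for each digest/DH-group/key-id combination. Below is a minimal sketch of that sequence, reconstructed only from the commands visible in this log; the `rpc_cmd` wrapper around `scripts/rpc.py`, the `set -euo pipefail` preamble, and the assumption that the `key${keyid}`/`ckey${keyid}` keyring entries and the matching target-side DH-HMAC-CHAP keys were registered earlier in the test are illustrative assumptions, not taken from the log.

```bash
#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration as recorded in the trace above.
# Assumes a running SPDK target listening on 10.0.0.1:4420 and that key${keyid}
# and ckey${keyid} name keys already registered with the target/keyring.
set -euo pipefail

rpc_cmd() { scripts/rpc.py "$@"; }   # hypothetical stand-in for the harness's rpc_cmd

digest=sha384
dhgroup=ffdhe3072
keyid=2
ip=10.0.0.1
port=4420
hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0

# Restrict the initiator to the digest and DH group under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with the host key (and controller key, when a ckey exists for this key id).
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s "$port" \
    -q "$hostnqn" -n "$subnqn" --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# Verify the authenticated controller came up, then detach for the next iteration.
[[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0
```

In the actual run, the surrounding loop iterates this block over every DH group (ffdhe2048, ffdhe3072, ffdhe4096, ...) and key id 0-4, which is why the same set_options/attach/get_controllers/detach pattern recurs throughout the trace.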
00:15:49.845 nvme0n1 00:15:49.845 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.845 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:49.845 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.845 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.845 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:49.845 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.845 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.845 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:49.845 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAzNTI3YzNmYjBjMDliYmQxMzMyZjUzYzcwYzczNGFEGgQ1: 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAzNTI3YzNmYjBjMDliYmQxMzMyZjUzYzcwYzczNGFEGgQ1: 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: ]] 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:49.846 12:50:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.846 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.106 nvme0n1 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:50.106 12:50:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: ]] 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:50.106 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:50.107 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:50.107 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:50.107 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:50.107 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:50.107 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:50.107 12:50:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:50.107 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:50.107 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:50.107 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.107 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.107 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.367 nvme0n1 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: ]] 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.367 12:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.627 nvme0n1 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGE3NGMxMTMxMTUyNTE0MGE2MzE2ODhmYzA2NzFlZTYwMDgwMzNhODc0MDllNzY5wXOezg==: 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGE3NGMxMTMxMTUyNTE0MGE2MzE2ODhmYzA2NzFlZTYwMDgwMzNhODc0MDllNzY5wXOezg==: 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: ]] 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.627 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.887 nvme0n1 00:15:50.887 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.887 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:50.887 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:50.887 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.887 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.887 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.887 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.887 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:50.887 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.887 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.887 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.887 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:50.887 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:15:50.887 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:50.887 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:50.887 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:50.887 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:50.887 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjQ2OGNmM2YzOTM2M2RiZGY0ODI4Yjk3ZDI0ZDIyZjljNWZlODczMTNmZDE4MDI5MjMyM2EyYzhhODMwNTJhMfxAxHc=: 00:15:50.887 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:50.887 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:50.887 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:50.887 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YjQ2OGNmM2YzOTM2M2RiZGY0ODI4Yjk3ZDI0ZDIyZjljNWZlODczMTNmZDE4MDI5MjMyM2EyYzhhODMwNTJhMfxAxHc=: 00:15:50.887 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:50.887 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:15:50.887 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:50.887 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:50.887 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:50.888 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:50.888 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:50.888 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:50.888 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.888 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.888 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.888 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:50.888 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:50.888 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:50.888 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:50.888 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:50.888 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:50.888 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:50.888 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:50.888 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:50.888 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:50.888 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:50.888 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:50.888 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.888 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.147 nvme0n1 00:15:51.147 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.147 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:51.147 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:51.147 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.147 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.147 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.147 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.147 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:51.147 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAzNTI3YzNmYjBjMDliYmQxMzMyZjUzYzcwYzczNGFEGgQ1: 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAzNTI3YzNmYjBjMDliYmQxMzMyZjUzYzcwYzczNGFEGgQ1: 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: ]] 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.148 12:50:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.148 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.408 nvme0n1 00:15:51.408 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.408 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:51.408 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.408 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.408 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:51.408 12:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.408 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.408 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:51.408 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.408 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.408 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.408 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:51.408 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:15:51.408 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:51.408 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:51.408 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:51.408 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:51.408 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:15:51.408 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:15:51.408 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:51.408 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:51.408 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:15:51.408 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: ]] 00:15:51.408 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:15:51.408 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:15:51.408 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:51.408 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:51.408 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:51.409 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:51.409 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:51.409 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:51.409 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.409 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.409 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.409 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:51.409 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:51.409 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:51.409 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:51.409 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:51.409 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:51.409 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:51.409 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:51.409 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:51.409 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:51.409 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:51.409 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.409 12:51:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.409 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.978 nvme0n1 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: ]] 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:51.978 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.979 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.979 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.238 nvme0n1 00:15:52.238 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.238 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGE3NGMxMTMxMTUyNTE0MGE2MzE2ODhmYzA2NzFlZTYwMDgwMzNhODc0MDllNzY5wXOezg==: 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGE3NGMxMTMxMTUyNTE0MGE2MzE2ODhmYzA2NzFlZTYwMDgwMzNhODc0MDllNzY5wXOezg==: 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: ]] 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.239 12:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.499 nvme0n1 00:15:52.499 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.499 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:52.499 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:52.499 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.499 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.499 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjQ2OGNmM2YzOTM2M2RiZGY0ODI4Yjk3ZDI0ZDIyZjljNWZlODczMTNmZDE4MDI5MjMyM2EyYzhhODMwNTJhMfxAxHc=: 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjQ2OGNmM2YzOTM2M2RiZGY0ODI4Yjk3ZDI0ZDIyZjljNWZlODczMTNmZDE4MDI5MjMyM2EyYzhhODMwNTJhMfxAxHc=: 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:52.759 12:51:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.759 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.018 nvme0n1 00:15:53.018 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.018 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:53.018 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.018 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:53.018 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.018 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.018 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.018 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:53.018 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.018 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.018 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.018 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:53.018 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:53.018 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:15:53.018 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAzNTI3YzNmYjBjMDliYmQxMzMyZjUzYzcwYzczNGFEGgQ1: 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAzNTI3YzNmYjBjMDliYmQxMzMyZjUzYzcwYzczNGFEGgQ1: 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: ]] 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.019 12:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.588 nvme0n1 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: ]] 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.588 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:53.589 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:53.589 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:53.589 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:53.589 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:53.589 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:53.589 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:53.589 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:53.589 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:53.589 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:53.589 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:53.589 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.589 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.589 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:54.157 nvme0n1 00:15:54.157 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.158 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:54.158 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:54.158 12:51:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.158 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:54.158 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: ]] 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.418 12:51:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.418 12:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:54.991 nvme0n1 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OGE3NGMxMTMxMTUyNTE0MGE2MzE2ODhmYzA2NzFlZTYwMDgwMzNhODc0MDllNzY5wXOezg==: 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGE3NGMxMTMxMTUyNTE0MGE2MzE2ODhmYzA2NzFlZTYwMDgwMzNhODc0MDllNzY5wXOezg==: 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: ]] 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:54.991 12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.991 
12:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.621 nvme0n1 00:15:55.621 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.621 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:55.621 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.621 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:55.621 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.621 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.621 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.621 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:55.621 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.621 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.621 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.621 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:55.621 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:15:55.621 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:55.621 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:55.621 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:55.621 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:55.621 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjQ2OGNmM2YzOTM2M2RiZGY0ODI4Yjk3ZDI0ZDIyZjljNWZlODczMTNmZDE4MDI5MjMyM2EyYzhhODMwNTJhMfxAxHc=: 00:15:55.621 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:55.621 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:55.621 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:55.622 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjQ2OGNmM2YzOTM2M2RiZGY0ODI4Yjk3ZDI0ZDIyZjljNWZlODczMTNmZDE4MDI5MjMyM2EyYzhhODMwNTJhMfxAxHc=: 00:15:55.622 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:55.622 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:15:55.622 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:55.622 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:55.622 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:55.622 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:55.622 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:55.622 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:55.622 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.622 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.622 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.622 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:55.622 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:55.622 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:55.622 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:55.622 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:55.622 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:55.622 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:55.622 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:55.622 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:55.622 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:55.622 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:55.622 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:55.622 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.622 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.206 nvme0n1 00:15:56.206 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.206 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:56.206 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:56.206 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.206 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.206 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.206 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.206 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:56.206 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.206 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.206 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.206 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:15:56.206 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:56.206 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:56.206 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:15:56.206 12:51:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:56.206 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:56.206 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAzNTI3YzNmYjBjMDliYmQxMzMyZjUzYzcwYzczNGFEGgQ1: 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAzNTI3YzNmYjBjMDliYmQxMzMyZjUzYzcwYzczNGFEGgQ1: 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: ]] 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:56.207 12:51:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.207 nvme0n1 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.207 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: ]] 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:15:56.473 12:51:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:56.473 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:56.474 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:56.474 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:56.474 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:56.474 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:56.474 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:56.474 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.474 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.474 12:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.474 nvme0n1 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: ]] 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.474 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.735 nvme0n1 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGE3NGMxMTMxMTUyNTE0MGE2MzE2ODhmYzA2NzFlZTYwMDgwMzNhODc0MDllNzY5wXOezg==: 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:OGE3NGMxMTMxMTUyNTE0MGE2MzE2ODhmYzA2NzFlZTYwMDgwMzNhODc0MDllNzY5wXOezg==: 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: ]] 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.735 nvme0n1 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.735 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjQ2OGNmM2YzOTM2M2RiZGY0ODI4Yjk3ZDI0ZDIyZjljNWZlODczMTNmZDE4MDI5MjMyM2EyYzhhODMwNTJhMfxAxHc=: 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjQ2OGNmM2YzOTM2M2RiZGY0ODI4Yjk3ZDI0ZDIyZjljNWZlODczMTNmZDE4MDI5MjMyM2EyYzhhODMwNTJhMfxAxHc=: 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:56.995 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.996 nvme0n1 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAzNTI3YzNmYjBjMDliYmQxMzMyZjUzYzcwYzczNGFEGgQ1: 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAzNTI3YzNmYjBjMDliYmQxMzMyZjUzYzcwYzczNGFEGgQ1: 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: ]] 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.996 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:15:57.300 nvme0n1 00:15:57.300 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.300 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:57.300 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.300 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.300 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:57.300 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.300 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.300 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:57.300 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.300 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.300 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.300 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:57.300 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:15:57.300 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:57.300 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: ]] 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.301 nvme0n1 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.301 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:57.560 12:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.560 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.560 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:57.560 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.560 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.560 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.560 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:57.560 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:15:57.560 
12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:57.560 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:57.560 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:57.560 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:57.560 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: ]] 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.561 nvme0n1 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.561 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.820 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.820 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:57.820 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:15:57.820 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:57.820 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:57.820 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:57.820 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:57.820 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGE3NGMxMTMxMTUyNTE0MGE2MzE2ODhmYzA2NzFlZTYwMDgwMzNhODc0MDllNzY5wXOezg==: 00:15:57.820 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: 00:15:57.820 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:57.820 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:57.820 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGE3NGMxMTMxMTUyNTE0MGE2MzE2ODhmYzA2NzFlZTYwMDgwMzNhODc0MDllNzY5wXOezg==: 00:15:57.820 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: ]] 00:15:57.820 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: 00:15:57.820 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:15:57.820 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:57.820 
12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:57.820 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:57.820 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:57.820 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.821 nvme0n1 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjQ2OGNmM2YzOTM2M2RiZGY0ODI4Yjk3ZDI0ZDIyZjljNWZlODczMTNmZDE4MDI5MjMyM2EyYzhhODMwNTJhMfxAxHc=: 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjQ2OGNmM2YzOTM2M2RiZGY0ODI4Yjk3ZDI0ZDIyZjljNWZlODczMTNmZDE4MDI5MjMyM2EyYzhhODMwNTJhMfxAxHc=: 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.821 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.080 nvme0n1 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAzNTI3YzNmYjBjMDliYmQxMzMyZjUzYzcwYzczNGFEGgQ1: 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAzNTI3YzNmYjBjMDliYmQxMzMyZjUzYzcwYzczNGFEGgQ1: 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: ]] 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.080 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.339 nvme0n1 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.339 
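
Note on the nvmet_auth_set_key calls traced above (host/auth.sh@42-51): before each connection attempt they provision the kernel target with the HMAC digest, the FFDHE group and the DHHC-1 secrets for the test host. The redirection targets are not captured by xtrace; the following is only a sketch of what those echoes presumably feed, assuming the standard Linux nvmet configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) - the host NQN and the secrets themselves are taken from the trace.

  # Sketch only - the configfs attribute paths are assumptions, not shown in this log.
  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
      echo "hmac(${digest})" > "${host_dir}/dhchap_hash"      # e.g. hmac(sha512)
      echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"   # e.g. ffdhe4096
      echo "${keys[keyid]}"  > "${host_dir}/dhchap_key"       # DHHC-1:... host secret
      # A controller (bidirectional) secret is written only when one exists for this key ID.
      [[ -z ${ckeys[keyid]} ]] || echo "${ckeys[keyid]}" > "${host_dir}/dhchap_ctrl_key"
  }
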
12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: ]] 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:58.339 12:51:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.339 12:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.598 nvme0n1 00:15:58.598 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.598 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:58.598 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.598 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.598 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:58.598 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.598 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.598 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:58.598 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.598 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.598 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.598 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:58.598 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:15:58.598 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:58.598 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:58.598 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:58.598 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:58.598 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:15:58.598 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:15:58.598 12:51:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:58.598 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:58.598 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:15:58.599 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: ]] 00:15:58.599 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:15:58.599 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:15:58.599 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:58.599 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:58.599 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:58.599 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:58.599 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:58.599 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:58.599 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.599 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.599 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.599 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:58.599 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:58.599 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:58.599 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:58.599 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:58.599 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:58.599 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:58.599 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:58.599 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:58.599 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:58.599 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:58.599 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.599 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.599 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.858 nvme0n1 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGE3NGMxMTMxMTUyNTE0MGE2MzE2ODhmYzA2NzFlZTYwMDgwMzNhODc0MDllNzY5wXOezg==: 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGE3NGMxMTMxMTUyNTE0MGE2MzE2ODhmYzA2NzFlZTYwMDgwMzNhODc0MDllNzY5wXOezg==: 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: ]] 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.858 12:51:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.858 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:58.859 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:58.859 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:58.859 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:58.859 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:58.859 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:58.859 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:58.859 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:58.859 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:58.859 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:58.859 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:58.859 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:58.859 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.859 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.117 nvme0n1 00:15:59.117 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.117 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:59.117 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.117 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.117 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:59.117 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.117 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.117 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:59.117 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.117 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:59.118 
12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjQ2OGNmM2YzOTM2M2RiZGY0ODI4Yjk3ZDI0ZDIyZjljNWZlODczMTNmZDE4MDI5MjMyM2EyYzhhODMwNTJhMfxAxHc=: 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjQ2OGNmM2YzOTM2M2RiZGY0ODI4Yjk3ZDI0ZDIyZjljNWZlODczMTNmZDE4MDI5MjMyM2EyYzhhODMwNTJhMfxAxHc=: 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.118 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
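
Each connect_authenticate pass traced here (host/auth.sh@55-65) repeats the same host-side sequence: restrict the bdev_nvme layer to one digest/DH-group pair, attach a controller to the kernel target at the initiator address, check that the controller actually appeared, then detach it. Condensed from the trace (rpc_cmd wraps SPDK's scripts/rpc.py; the NQNs and port are the fixed values used throughout this run, and 10.0.0.1 is hard-coded here for brevity where the script calls get_main_ns_ip):

  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3
      # Pass the controller key only when a ckey exists for this key ID,
      # mirroring the ckey=() expansion at host/auth.sh@58 in the trace.
      local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"
      # The attach only succeeds if DH-HMAC-CHAP negotiation with the target completed.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }
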
00:15:59.377 nvme0n1 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAzNTI3YzNmYjBjMDliYmQxMzMyZjUzYzcwYzczNGFEGgQ1: 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAzNTI3YzNmYjBjMDliYmQxMzMyZjUzYzcwYzczNGFEGgQ1: 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: ]] 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:59.377 12:51:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.377 12:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.636 nvme0n1 00:15:59.636 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.636 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:59.636 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:59.636 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.636 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.636 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.636 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.636 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:59.636 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.636 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:59.896 12:51:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: ]] 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:59.896 12:51:08 
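
The helper being traced at this point, get_main_ns_ip (nvmf/common.sh@769-783), is what turns the transport type into the 10.0.0.1 address used by every attach in this section: it maps each transport to the name of an environment variable and then expands that variable. A rough reconstruction from the trace; the name of the transport variable itself is not visible in the xtrace output and is an assumption here.

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      # The transport is tcp in this run, so the candidate is NVMF_INITIATOR_IP.
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      ip=${!ip}                 # indirect expansion: NVMF_INITIATOR_IP -> 10.0.0.1
      [[ -z $ip ]] && return 1
      echo "$ip"
  }
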
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.896 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.155 nvme0n1 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: ]] 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.155 12:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.414 nvme0n1 00:16:00.414 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.414 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:00.414 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:00.414 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.414 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.414 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.414 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.414 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:00.414 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.414 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.414 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.414 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:00.414 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:16:00.414 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:00.414 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:00.414 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:00.414 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:00.415 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGE3NGMxMTMxMTUyNTE0MGE2MzE2ODhmYzA2NzFlZTYwMDgwMzNhODc0MDllNzY5wXOezg==: 00:16:00.415 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: 00:16:00.415 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:00.415 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:00.415 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGE3NGMxMTMxMTUyNTE0MGE2MzE2ODhmYzA2NzFlZTYwMDgwMzNhODc0MDllNzY5wXOezg==: 00:16:00.415 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: ]] 00:16:00.415 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: 00:16:00.415 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:16:00.415 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:00.415 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:00.415 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:00.415 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:00.415 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:00.415 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:00.415 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.415 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.674 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.674 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:00.674 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:00.674 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:00.674 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:00.674 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:00.674 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:00.674 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:00.674 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:00.674 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:00.674 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:00.674 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:00.674 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:00.674 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.674 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.933 nvme0n1 00:16:00.933 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.933 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:00.933 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:00.933 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.933 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.933 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.933 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.933 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:00.933 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.933 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.933 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.933 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:00.933 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:16:00.933 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:00.933 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:00.933 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:00.933 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:00.933 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjQ2OGNmM2YzOTM2M2RiZGY0ODI4Yjk3ZDI0ZDIyZjljNWZlODczMTNmZDE4MDI5MjMyM2EyYzhhODMwNTJhMfxAxHc=: 00:16:00.933 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:00.933 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:00.933 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:00.933 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YjQ2OGNmM2YzOTM2M2RiZGY0ODI4Yjk3ZDI0ZDIyZjljNWZlODczMTNmZDE4MDI5MjMyM2EyYzhhODMwNTJhMfxAxHc=: 00:16:00.933 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:00.933 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:16:00.933 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:00.934 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:00.934 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:00.934 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:00.934 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:00.934 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:00.934 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.934 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.934 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.934 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:00.934 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:00.934 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:00.934 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:00.934 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:00.934 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:00.934 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:00.934 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:00.934 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:00.934 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:00.934 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:00.934 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:00.934 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.934 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.192 nvme0n1 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAzNTI3YzNmYjBjMDliYmQxMzMyZjUzYzcwYzczNGFEGgQ1: 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAzNTI3YzNmYjBjMDliYmQxMzMyZjUzYzcwYzczNGFEGgQ1: 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: ]] 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2U4YjgyYTFiZTAzNzJlNjg2MTFkMDJhY2FkNzIxOGE2ZjhkOGIxNTllYjgzZjE4YzNhNjA3MmI0MWViYTllNKWBMLE=: 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.193 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.451 12:51:09 
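
Here the sweep reaches ffdhe8192, the last DH group exercised with sha512. The structure driving this whole section is visible at host/auth.sh@101-104: two nested loops that provision the target and then authenticate from the host for every DH-group/key-ID combination. Reconstructed from those trace lines; the exact contents of the dhgroups and keys arrays are set up earlier in the script and are not shown here.

  # For the sha512 portion of the log; the same sweep runs for the other digests.
  for dhgroup in "${dhgroups[@]}"; do            # ... ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192
      for keyid in "${!keys[@]}"; do             # key IDs 0-4 in this run
          nvmet_auth_set_key sha512 "$dhgroup" "$keyid"      # target-side provisioning
          connect_authenticate sha512 "$dhgroup" "$keyid"    # host-side attach/verify/detach
      done
  done
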
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:01.451 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:01.451 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:01.452 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:01.452 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:01.452 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:01.452 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:01.452 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:01.452 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:01.452 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:01.452 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:01.452 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.452 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.452 12:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.711 nvme0n1 00:16:01.711 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.711 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:01.711 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.711 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:01.711 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.711 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.970 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: ]] 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.971 12:51:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.971 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.539 nvme0n1 00:16:02.539 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.539 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:02.539 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:02.540 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.540 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.540 12:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: ]] 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.540 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.108 nvme0n1 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGE3NGMxMTMxMTUyNTE0MGE2MzE2ODhmYzA2NzFlZTYwMDgwMzNhODc0MDllNzY5wXOezg==: 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGE3NGMxMTMxMTUyNTE0MGE2MzE2ODhmYzA2NzFlZTYwMDgwMzNhODc0MDllNzY5wXOezg==: 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: ]] 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWY4ZTFhOThhYjRlNTgzNDdjNjI0ZTAwOTk3MWFlM2OB9dYA: 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:03.108 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:03.109 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:03.109 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:03.109 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:03.109 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:03.109 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:03.109 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:03.109 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.109 12:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.676 nvme0n1 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjQ2OGNmM2YzOTM2M2RiZGY0ODI4Yjk3ZDI0ZDIyZjljNWZlODczMTNmZDE4MDI5MjMyM2EyYzhhODMwNTJhMfxAxHc=: 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjQ2OGNmM2YzOTM2M2RiZGY0ODI4Yjk3ZDI0ZDIyZjljNWZlODczMTNmZDE4MDI5MjMyM2EyYzhhODMwNTJhMfxAxHc=: 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:03.676 12:51:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.676 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.244 nvme0n1 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: ]] 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:04.244 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:04.245 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.245 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.245 request: 00:16:04.245 { 00:16:04.245 "name": "nvme0", 00:16:04.245 "trtype": "tcp", 00:16:04.245 "traddr": "10.0.0.1", 00:16:04.245 "adrfam": "ipv4", 00:16:04.245 "trsvcid": "4420", 00:16:04.245 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:04.245 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:04.245 "prchk_reftag": false, 00:16:04.245 "prchk_guard": false, 00:16:04.245 "hdgst": false, 00:16:04.245 "ddgst": false, 00:16:04.245 "allow_unrecognized_csi": false, 00:16:04.245 "method": "bdev_nvme_attach_controller", 00:16:04.245 "req_id": 1 00:16:04.245 } 00:16:04.245 Got JSON-RPC error response 00:16:04.245 response: 00:16:04.245 { 00:16:04.245 "code": -5, 00:16:04.245 "message": "Input/output error" 00:16:04.245 } 00:16:04.245 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:04.245 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:16:04.245 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:04.245 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:04.245 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:04.245 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:16:04.245 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:16:04.245 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.245 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.245 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.504 request: 00:16:04.504 { 00:16:04.504 "name": "nvme0", 00:16:04.504 "trtype": "tcp", 00:16:04.504 "traddr": "10.0.0.1", 00:16:04.504 "adrfam": "ipv4", 00:16:04.504 "trsvcid": "4420", 00:16:04.504 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:04.504 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:04.504 "prchk_reftag": false, 00:16:04.504 "prchk_guard": false, 00:16:04.504 "hdgst": false, 00:16:04.504 "ddgst": false, 00:16:04.504 "dhchap_key": "key2", 00:16:04.504 "allow_unrecognized_csi": false, 00:16:04.504 "method": "bdev_nvme_attach_controller", 00:16:04.504 "req_id": 1 00:16:04.504 } 00:16:04.504 Got JSON-RPC error response 00:16:04.504 response: 00:16:04.504 { 00:16:04.504 "code": -5, 00:16:04.504 "message": "Input/output error" 00:16:04.504 } 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:04.504 12:51:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.504 12:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.504 request: 00:16:04.504 { 00:16:04.504 "name": "nvme0", 00:16:04.504 "trtype": "tcp", 00:16:04.504 "traddr": "10.0.0.1", 00:16:04.504 "adrfam": "ipv4", 00:16:04.504 "trsvcid": "4420", 
00:16:04.504 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:04.504 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:04.504 "prchk_reftag": false, 00:16:04.504 "prchk_guard": false, 00:16:04.504 "hdgst": false, 00:16:04.504 "ddgst": false, 00:16:04.504 "dhchap_key": "key1", 00:16:04.504 "dhchap_ctrlr_key": "ckey2", 00:16:04.504 "allow_unrecognized_csi": false, 00:16:04.504 "method": "bdev_nvme_attach_controller", 00:16:04.504 "req_id": 1 00:16:04.504 } 00:16:04.504 Got JSON-RPC error response 00:16:04.504 response: 00:16:04.504 { 00:16:04.504 "code": -5, 00:16:04.504 "message": "Input/output error" 00:16:04.504 } 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.504 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.504 nvme0n1 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: ]] 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.763 request: 00:16:04.763 { 00:16:04.763 "name": "nvme0", 00:16:04.763 "dhchap_key": "key1", 00:16:04.763 "dhchap_ctrlr_key": "ckey2", 00:16:04.763 "method": "bdev_nvme_set_keys", 00:16:04.763 "req_id": 1 00:16:04.763 } 00:16:04.763 Got JSON-RPC error response 00:16:04.763 response: 00:16:04.763 
{ 00:16:04.763 "code": -13, 00:16:04.763 "message": "Permission denied" 00:16:04.763 } 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:16:04.763 12:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:16:05.696 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:16:05.696 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:16:05.696 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.696 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.696 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.955 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:16:05.955 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:05.955 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:05.955 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:05.955 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:05.955 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:05.955 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTYzYWU0MjNmM2MyMjYxYzBkNmJmNDZlM2Y1YWU3YTgxOWJlOTgwMzA4N2ZjYWU0xu5z1Q==: 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: ]] 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTg1MjAxMzE2ZDkzNWUzMjQwZmQzNDBmZmJkMDY1ZjgzOTRmOTczYzcyOGVlZWY5WfJXXQ==: 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.956 nvme0n1 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI5ZGMyNmZhYzFiNjdkYTI5ZjY1M2Y5MmRiMGZiZDLD6yl7: 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: ]] 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQxNzA1Y2RmM2IyOTEwMzdlZmE2NzM2ZTNiYWEyNzQ4p1Ww: 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.956 request: 00:16:05.956 { 00:16:05.956 "name": "nvme0", 00:16:05.956 "dhchap_key": "key2", 00:16:05.956 "dhchap_ctrlr_key": "ckey1", 00:16:05.956 "method": "bdev_nvme_set_keys", 00:16:05.956 "req_id": 1 00:16:05.956 } 00:16:05.956 Got JSON-RPC error response 00:16:05.956 response: 00:16:05.956 { 00:16:05.956 "code": -13, 00:16:05.956 "message": "Permission denied" 00:16:05.956 } 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:16:05.956 12:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:16:07.333 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:16:07.333 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:16:07.333 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.333 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.333 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.333 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:16:07.333 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:16:07.333 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:16:07.333 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:16:07.333 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:16:07.333 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:16:07.333 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:07.333 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:16:07.333 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:07.333 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:07.333 rmmod nvme_tcp 00:16:07.333 rmmod nvme_fabrics 00:16:07.333 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:07.333 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:16:07.333 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:16:07.333 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 77684 ']' 00:16:07.333 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 77684 00:16:07.333 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 77684 ']' 00:16:07.333 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 77684 00:16:07.333 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:16:07.333 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:07.333 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77684 00:16:07.333 killing process with pid 77684 00:16:07.334 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:07.334 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:07.334 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77684' 00:16:07.334 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 77684 00:16:07.334 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 77684 00:16:07.334 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:07.334 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:07.334 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:07.334 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:16:07.334 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:16:07.334 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:16:07.334 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:07.334 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:07.334 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:07.334 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:07.334 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:07.334 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:07.334 12:51:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:07.334 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:07.334 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:07.334 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:07.334 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:07.334 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:07.334 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:07.334 12:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:07.592 12:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:07.592 12:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:07.592 12:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:07.592 12:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.592 12:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:07.592 12:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.592 12:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:16:07.592 12:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:07.592 12:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:07.592 12:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:16:07.593 12:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:16:07.593 12:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:16:07.593 12:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:07.593 12:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:07.593 12:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:07.593 12:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:07.593 12:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:16:07.593 12:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:16:07.593 12:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:08.161 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:08.420 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:16:08.420 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:08.420 12:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.pLE /tmp/spdk.key-null.pmj /tmp/spdk.key-sha256.bp0 /tmp/spdk.key-sha384.Bls /tmp/spdk.key-sha512.Pq6 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:16:08.420 12:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:08.990 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:08.990 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:08.990 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:08.990 00:16:08.990 real 0m35.298s 00:16:08.990 user 0m32.659s 00:16:08.990 sys 0m3.774s 00:16:08.990 12:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:08.990 12:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.990 ************************************ 00:16:08.990 END TEST nvmf_auth_host 00:16:08.990 ************************************ 00:16:08.990 12:51:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:16:08.990 12:51:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:08.990 12:51:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:08.990 12:51:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:08.990 12:51:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.990 ************************************ 00:16:08.990 START TEST nvmf_digest 00:16:08.990 ************************************ 00:16:08.990 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:08.990 * Looking for test storage... 
00:16:08.990 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:08.990 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:08.990 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:16:08.990 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:09.251 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:09.251 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:09.251 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:09.251 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:09.251 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:16:09.251 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:16:09.251 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:16:09.251 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:16:09.251 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:16:09.251 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:16:09.251 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:16:09.251 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:09.251 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:16:09.251 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:16:09.251 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:09.251 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:09.251 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:09.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.252 --rc genhtml_branch_coverage=1 00:16:09.252 --rc genhtml_function_coverage=1 00:16:09.252 --rc genhtml_legend=1 00:16:09.252 --rc geninfo_all_blocks=1 00:16:09.252 --rc geninfo_unexecuted_blocks=1 00:16:09.252 00:16:09.252 ' 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:09.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.252 --rc genhtml_branch_coverage=1 00:16:09.252 --rc genhtml_function_coverage=1 00:16:09.252 --rc genhtml_legend=1 00:16:09.252 --rc geninfo_all_blocks=1 00:16:09.252 --rc geninfo_unexecuted_blocks=1 00:16:09.252 00:16:09.252 ' 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:09.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.252 --rc genhtml_branch_coverage=1 00:16:09.252 --rc genhtml_function_coverage=1 00:16:09.252 --rc genhtml_legend=1 00:16:09.252 --rc geninfo_all_blocks=1 00:16:09.252 --rc geninfo_unexecuted_blocks=1 00:16:09.252 00:16:09.252 ' 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:09.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.252 --rc genhtml_branch_coverage=1 00:16:09.252 --rc genhtml_function_coverage=1 00:16:09.252 --rc genhtml_legend=1 00:16:09.252 --rc geninfo_all_blocks=1 00:16:09.252 --rc geninfo_unexecuted_blocks=1 00:16:09.252 00:16:09.252 ' 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:09.252 12:51:17 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:09.252 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:09.252 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:09.253 Cannot find device "nvmf_init_br" 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:09.253 Cannot find device "nvmf_init_br2" 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:09.253 Cannot find device "nvmf_tgt_br" 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:16:09.253 Cannot find device "nvmf_tgt_br2" 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:09.253 Cannot find device "nvmf_init_br" 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:09.253 Cannot find device "nvmf_init_br2" 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:09.253 Cannot find device "nvmf_tgt_br" 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:09.253 Cannot find device "nvmf_tgt_br2" 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:09.253 Cannot find device "nvmf_br" 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:09.253 Cannot find device "nvmf_init_if" 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:09.253 Cannot find device "nvmf_init_if2" 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:09.253 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:09.253 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:09.253 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:09.513 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:09.513 12:51:17 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:09.513 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:09.513 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:09.513 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:09.514 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:09.514 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:09.514 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:09.514 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:09.514 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:09.514 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:09.514 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:09.514 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:09.514 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:09.514 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:09.514 12:51:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:09.514 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:09.514 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:16:09.514 00:16:09.514 --- 10.0.0.3 ping statistics --- 00:16:09.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.514 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:09.514 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:09.514 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:16:09.514 00:16:09.514 --- 10.0.0.4 ping statistics --- 00:16:09.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.514 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:09.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:09.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:16:09.514 00:16:09.514 --- 10.0.0.1 ping statistics --- 00:16:09.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.514 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:09.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:09.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:16:09.514 00:16:09.514 --- 10.0.0.2 ping statistics --- 00:16:09.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.514 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:16:09.514 ************************************ 00:16:09.514 START TEST nvmf_digest_clean 00:16:09.514 ************************************ 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
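Before the digest runs start, it helps to see the virtual topology the harness has just assembled from the ip/iptables trace above: veth pairs for initiator and target, the target ends moved into the nvmf_tgt_ns_spdk namespace, everything joined by the nvmf_br bridge, and TCP port 4420 opened on the initiator interfaces, verified with the single-packet pings. The sketch below is a condensed recap covering only the first initiator/target pair; interface names, addresses, and the port are taken from the trace, and it needs to run as root.

  # Minimal sketch (one veth pair per side) of the topology built above.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # Open the NVMe/TCP port on the initiator side and allow bridged traffic.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3   # initiator -> target connectivity check, as in the log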
00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=79320 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 79320 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79320 ']' 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:09.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:09.514 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:09.774 [2024-11-15 12:51:18.190703] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:16:09.774 [2024-11-15 12:51:18.190784] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.774 [2024-11-15 12:51:18.342790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.774 [2024-11-15 12:51:18.381485] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:09.774 [2024-11-15 12:51:18.381546] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:09.774 [2024-11-15 12:51:18.381560] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:09.774 [2024-11-15 12:51:18.381570] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:09.774 [2024-11-15 12:51:18.381579] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
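The nvmfappstart step traced here reduces to launching the target inside the namespace with initialization deferred (--wait-for-rpc) and waiting for its RPC socket. The command line and socket path below are the ones shown in the trace; the polling loop is an illustrative stand-in for the harness's waitforlisten helper, not its exact implementation.

  # Start the SPDK NVMe-oF target in the test namespace, deferring subsystem init.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # Illustrative wait: poll until the RPC socket answers (rpc_get_methods is a standard SPDK RPC).
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  # With --wait-for-rpc, framework initialization is then triggered explicitly over RPC.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init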
00:16:09.774 [2024-11-15 12:51:18.381974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.034 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:10.034 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:16:10.034 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:10.034 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:10.034 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:10.034 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:10.034 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:16:10.034 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:16:10.034 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:16:10.034 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.034 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:10.034 [2024-11-15 12:51:18.522148] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:10.034 null0 00:16:10.034 [2024-11-15 12:51:18.559383] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:10.034 [2024-11-15 12:51:18.583498] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:10.034 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.034 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:16:10.034 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:10.034 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:10.034 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:16:10.034 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:16:10.034 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:16:10.034 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:10.034 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79345 00:16:10.034 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79345 /var/tmp/bperf.sock 00:16:10.034 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:10.034 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79345 ']' 00:16:10.034 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:16:10.035 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:10.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:10.035 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:10.035 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:10.035 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:10.035 [2024-11-15 12:51:18.646958] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:16:10.035 [2024-11-15 12:51:18.647041] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79345 ] 00:16:10.294 [2024-11-15 12:51:18.794330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.294 [2024-11-15 12:51:18.824884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.294 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:10.294 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:16:10.294 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:10.294 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:10.294 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:10.554 [2024-11-15 12:51:19.207409] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:10.813 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:10.813 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:11.073 nvme0n1 00:16:11.073 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:11.073 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:11.073 Running I/O for 2 seconds... 
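Each run_bperf invocation in this test follows the same three-step pattern visible in the trace: start bdevperf idle on its own RPC socket with --wait-for-rpc, finish framework init and attach an NVMe-oF controller with data digest enabled over RPC, then drive the timed workload through bdevperf.py. The commands below are copied from the trace for the first (randread, 4 KiB, QD 128) run; only the step comments are added, and the harness additionally waits for /var/tmp/bperf.sock before issuing RPCs.

  SPDK=/home/vagrant/spdk_repo/spdk
  # 1. Start bdevperf idle (-z) on a dedicated RPC socket: randread, 4 KiB, QD 128, 2 s.
  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # 2. Complete framework init, then attach the TCP controller with data digest (--ddgst).
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # 3. Run the timed workload against the attached nvme0n1 bdev.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests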
00:16:13.391 17526.00 IOPS, 68.46 MiB/s [2024-11-15T12:51:22.061Z] 17780.00 IOPS, 69.45 MiB/s 00:16:13.391 Latency(us) 00:16:13.391 [2024-11-15T12:51:22.061Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.391 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:13.391 nvme0n1 : 2.01 17773.74 69.43 0.00 0.00 7196.32 6613.18 22163.08 00:16:13.391 [2024-11-15T12:51:22.061Z] =================================================================================================================== 00:16:13.391 [2024-11-15T12:51:22.061Z] Total : 17773.74 69.43 0.00 0.00 7196.32 6613.18 22163.08 00:16:13.391 { 00:16:13.391 "results": [ 00:16:13.391 { 00:16:13.391 "job": "nvme0n1", 00:16:13.391 "core_mask": "0x2", 00:16:13.391 "workload": "randread", 00:16:13.391 "status": "finished", 00:16:13.391 "queue_depth": 128, 00:16:13.391 "io_size": 4096, 00:16:13.391 "runtime": 2.007906, 00:16:13.391 "iops": 17773.740404182267, 00:16:13.391 "mibps": 69.42867345383698, 00:16:13.391 "io_failed": 0, 00:16:13.391 "io_timeout": 0, 00:16:13.391 "avg_latency_us": 7196.317499133908, 00:16:13.391 "min_latency_us": 6613.178181818182, 00:16:13.391 "max_latency_us": 22163.083636363637 00:16:13.391 } 00:16:13.391 ], 00:16:13.391 "core_count": 1 00:16:13.391 } 00:16:13.391 12:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:13.391 12:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:13.391 12:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:13.391 12:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:13.391 | select(.opcode=="crc32c") 00:16:13.391 | "\(.module_name) \(.executed)"' 00:16:13.391 12:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:13.391 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:13.391 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:13.391 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:13.391 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:13.391 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79345 00:16:13.391 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79345 ']' 00:16:13.391 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79345 00:16:13.391 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:16:13.391 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:13.391 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79345 00:16:13.651 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:13.651 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
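The pass/fail decision for each digest run comes from the accel framework statistics rather than the IOPS numbers: the test asks bdevperf which module executed the crc32c operations and how many times, and for this non-DSA variant it expects a non-zero count from the software module. A short sketch of that check, using the RPC call and jq filter shown in the trace:

  # Ask bdevperf's accel layer which module computed crc32c and how often.
  read -r acc_module acc_executed < <(
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  # With scan_dsa=false the expected module is "software" and the count must be non-zero.
  [[ $acc_executed -gt 0 && $acc_module == software ]] && echo "digest crc32c check passed"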
00:16:13.651 killing process with pid 79345 00:16:13.651 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79345' 00:16:13.651 Received shutdown signal, test time was about 2.000000 seconds 00:16:13.651 00:16:13.651 Latency(us) 00:16:13.651 [2024-11-15T12:51:22.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.651 [2024-11-15T12:51:22.321Z] =================================================================================================================== 00:16:13.651 [2024-11-15T12:51:22.321Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:13.651 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79345 00:16:13.651 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79345 00:16:13.651 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:16:13.651 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:13.651 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:13.651 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:16:13.651 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:16:13.651 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:16:13.651 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:13.651 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:13.651 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79392 00:16:13.651 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79392 /var/tmp/bperf.sock 00:16:13.651 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79392 ']' 00:16:13.651 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:13.651 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:13.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:13.651 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:13.651 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:13.651 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:13.651 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:13.651 Zero copy mechanism will not be used. 00:16:13.651 [2024-11-15 12:51:22.239768] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:16:13.651 [2024-11-15 12:51:22.239876] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79392 ] 00:16:13.910 [2024-11-15 12:51:22.382844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.910 [2024-11-15 12:51:22.428554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.910 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:13.910 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:16:13.910 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:13.910 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:13.911 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:14.170 [2024-11-15 12:51:22.767388] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:14.170 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:14.170 12:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:14.737 nvme0n1 00:16:14.737 12:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:14.737 12:51:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:14.737 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:14.737 Zero copy mechanism will not be used. 00:16:14.737 Running I/O for 2 seconds... 
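The second run swaps the workload to 128 KiB reads at queue depth 16 (hence the "I/O size of 131072 is greater than zero copy threshold" notices), so the MiB/s figure in the results that follow is simply IOPS divided by 8. A quick arithmetic check against the IOPS value reported in the JSON below (8645.50 is taken from that output, not computed here):

  # 131072-byte reads: MiB/s = IOPS * 131072 / 2^20, i.e. IOPS / 8.
  awk 'BEGIN { iops = 8645.50; printf "%.2f MiB/s\n", iops * 131072 / (1024 * 1024) }'
  # -> 1080.69 MiB/s, matching the "mibps" value in the results JSON.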
00:16:16.620 8624.00 IOPS, 1078.00 MiB/s [2024-11-15T12:51:25.290Z] 8648.00 IOPS, 1081.00 MiB/s 00:16:16.620 Latency(us) 00:16:16.620 [2024-11-15T12:51:25.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.620 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:16.620 nvme0n1 : 2.00 8645.50 1080.69 0.00 0.00 1847.89 1645.85 4498.15 00:16:16.620 [2024-11-15T12:51:25.290Z] =================================================================================================================== 00:16:16.620 [2024-11-15T12:51:25.290Z] Total : 8645.50 1080.69 0.00 0.00 1847.89 1645.85 4498.15 00:16:16.620 { 00:16:16.620 "results": [ 00:16:16.620 { 00:16:16.620 "job": "nvme0n1", 00:16:16.620 "core_mask": "0x2", 00:16:16.620 "workload": "randread", 00:16:16.620 "status": "finished", 00:16:16.620 "queue_depth": 16, 00:16:16.620 "io_size": 131072, 00:16:16.620 "runtime": 2.002428, 00:16:16.620 "iops": 8645.50435770974, 00:16:16.620 "mibps": 1080.6880447137175, 00:16:16.620 "io_failed": 0, 00:16:16.620 "io_timeout": 0, 00:16:16.620 "avg_latency_us": 1847.8878843891785, 00:16:16.620 "min_latency_us": 1645.8472727272726, 00:16:16.620 "max_latency_us": 4498.152727272727 00:16:16.620 } 00:16:16.620 ], 00:16:16.620 "core_count": 1 00:16:16.620 } 00:16:16.620 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:16.620 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:16.620 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:16.620 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:16.620 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:16.620 | select(.opcode=="crc32c") 00:16:16.620 | "\(.module_name) \(.executed)"' 00:16:16.880 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:16.880 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:16.880 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:16.880 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:16.880 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79392 00:16:16.880 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79392 ']' 00:16:16.880 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79392 00:16:16.880 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:16:16.880 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:16.880 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79392 00:16:16.880 killing process with pid 79392 00:16:16.880 Received shutdown signal, test time was about 2.000000 seconds 00:16:16.880 00:16:16.880 Latency(us) 00:16:16.880 [2024-11-15T12:51:25.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:16:16.880 [2024-11-15T12:51:25.550Z] =================================================================================================================== 00:16:16.880 [2024-11-15T12:51:25.550Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:16.880 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:16.880 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:16.880 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79392' 00:16:16.880 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79392 00:16:16.880 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79392 00:16:17.139 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:16:17.139 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:17.139 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:17.139 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:16:17.139 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:16:17.139 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:16:17.139 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:17.139 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:17.139 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79445 00:16:17.139 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79445 /var/tmp/bperf.sock 00:16:17.139 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79445 ']' 00:16:17.139 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:17.139 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:17.139 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:17.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:17.139 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:17.139 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:17.139 [2024-11-15 12:51:25.690514] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:16:17.139 [2024-11-15 12:51:25.690771] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79445 ] 00:16:17.398 [2024-11-15 12:51:25.827934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.398 [2024-11-15 12:51:25.858991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.398 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:17.398 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:16:17.398 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:17.398 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:17.398 12:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:17.657 [2024-11-15 12:51:26.181766] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:17.657 12:51:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:17.657 12:51:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:17.915 nvme0n1 00:16:17.915 12:51:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:17.915 12:51:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:18.174 Running I/O for 2 seconds... 
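The randwrite pass above is driven entirely over RPC, and the same flow can be reproduced by hand. A minimal sketch, using only commands and socket paths already visible in this trace (script paths are shortened for readability and assume the SPDK repository root as working directory):

    # start bdevperf idle (-z waits for a perform_tests RPC, --wait-for-rpc defers framework init)
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # complete framework init, then attach the NVMe-oF/TCP controller with data digest enabled
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # drive the two-second workload against the resulting nvme0n1 bdev
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Because bdevperf is started with -z, it sits idle until perform_tests is sent, so all of the configuration above happens before any I/O is issued and the digest work then shows up in the accel crc32c statistics checked after each run.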
00:16:20.097 19178.00 IOPS, 74.91 MiB/s [2024-11-15T12:51:28.767Z] 19177.50 IOPS, 74.91 MiB/s 00:16:20.097 Latency(us) 00:16:20.097 [2024-11-15T12:51:28.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.097 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:20.097 nvme0n1 : 2.01 19185.38 74.94 0.00 0.00 6666.32 6225.92 14775.39 00:16:20.097 [2024-11-15T12:51:28.767Z] =================================================================================================================== 00:16:20.097 [2024-11-15T12:51:28.767Z] Total : 19185.38 74.94 0.00 0.00 6666.32 6225.92 14775.39 00:16:20.097 { 00:16:20.097 "results": [ 00:16:20.097 { 00:16:20.097 "job": "nvme0n1", 00:16:20.097 "core_mask": "0x2", 00:16:20.097 "workload": "randwrite", 00:16:20.097 "status": "finished", 00:16:20.097 "queue_depth": 128, 00:16:20.097 "io_size": 4096, 00:16:20.097 "runtime": 2.00585, 00:16:20.097 "iops": 19185.382755440336, 00:16:20.097 "mibps": 74.94290138843881, 00:16:20.097 "io_failed": 0, 00:16:20.097 "io_timeout": 0, 00:16:20.097 "avg_latency_us": 6666.324496152965, 00:16:20.097 "min_latency_us": 6225.92, 00:16:20.097 "max_latency_us": 14775.389090909091 00:16:20.097 } 00:16:20.097 ], 00:16:20.097 "core_count": 1 00:16:20.097 } 00:16:20.097 12:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:20.097 12:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:20.097 12:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:20.097 | select(.opcode=="crc32c") 00:16:20.097 | "\(.module_name) \(.executed)"' 00:16:20.097 12:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:20.097 12:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:20.356 12:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:20.356 12:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:20.356 12:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:20.356 12:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:20.356 12:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79445 00:16:20.356 12:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79445 ']' 00:16:20.356 12:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79445 00:16:20.356 12:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:16:20.356 12:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:20.356 12:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79445 00:16:20.356 12:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:20.356 12:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:20.356 
killing process with pid 79445 00:16:20.356 12:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79445' 00:16:20.356 Received shutdown signal, test time was about 2.000000 seconds 00:16:20.356 00:16:20.356 Latency(us) 00:16:20.356 [2024-11-15T12:51:29.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.356 [2024-11-15T12:51:29.026Z] =================================================================================================================== 00:16:20.356 [2024-11-15T12:51:29.026Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:20.356 12:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79445 00:16:20.356 12:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79445 00:16:20.616 12:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:16:20.616 12:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:20.616 12:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:20.616 12:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:16:20.616 12:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:16:20.616 12:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:16:20.616 12:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:20.616 12:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79493 00:16:20.616 12:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79493 /var/tmp/bperf.sock 00:16:20.616 12:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:20.616 12:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79493 ']' 00:16:20.616 12:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:20.616 12:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:20.616 12:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:20.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:20.616 12:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:20.616 12:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:20.616 [2024-11-15 12:51:29.132373] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:16:20.616 [2024-11-15 12:51:29.132696] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79493 ] 00:16:20.616 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:20.616 Zero copy mechanism will not be used. 00:16:20.616 [2024-11-15 12:51:29.276070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.875 [2024-11-15 12:51:29.306993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.442 12:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:21.442 12:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:16:21.442 12:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:21.442 12:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:21.442 12:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:21.701 [2024-11-15 12:51:30.246181] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:21.701 12:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:21.701 12:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:21.960 nvme0n1 00:16:22.230 12:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:22.230 12:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:22.230 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:22.230 Zero copy mechanism will not be used. 00:16:22.230 Running I/O for 2 seconds... 
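After each two-second run the script verifies that the CRC-32C digest work was really executed, and by the expected module, before killing bdevperf. A sketch of that check, using the accel_get_stats call and jq filter that appear throughout this trace (the printed count is illustrative only):

    scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # illustrative output shape: software 31415

The test then requires the executed count to be greater than zero and the module name to match the expected 'software' module, which is what the host/digest.sh@95 and @96 lines in this trace assert.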
00:16:24.136 7278.00 IOPS, 909.75 MiB/s [2024-11-15T12:51:32.806Z] 7339.50 IOPS, 917.44 MiB/s 00:16:24.136 Latency(us) 00:16:24.136 [2024-11-15T12:51:32.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.136 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:24.136 nvme0n1 : 2.00 7337.44 917.18 0.00 0.00 2175.78 1936.29 5213.09 00:16:24.136 [2024-11-15T12:51:32.806Z] =================================================================================================================== 00:16:24.136 [2024-11-15T12:51:32.806Z] Total : 7337.44 917.18 0.00 0.00 2175.78 1936.29 5213.09 00:16:24.136 { 00:16:24.136 "results": [ 00:16:24.136 { 00:16:24.136 "job": "nvme0n1", 00:16:24.136 "core_mask": "0x2", 00:16:24.136 "workload": "randwrite", 00:16:24.136 "status": "finished", 00:16:24.136 "queue_depth": 16, 00:16:24.136 "io_size": 131072, 00:16:24.136 "runtime": 2.002742, 00:16:24.136 "iops": 7337.4403692537535, 00:16:24.136 "mibps": 917.1800461567192, 00:16:24.136 "io_failed": 0, 00:16:24.136 "io_timeout": 0, 00:16:24.136 "avg_latency_us": 2175.7773610071454, 00:16:24.136 "min_latency_us": 1936.290909090909, 00:16:24.136 "max_latency_us": 5213.090909090909 00:16:24.136 } 00:16:24.136 ], 00:16:24.136 "core_count": 1 00:16:24.136 } 00:16:24.136 12:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:24.136 12:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:24.136 12:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:24.136 12:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:24.136 12:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:24.136 | select(.opcode=="crc32c") 00:16:24.136 | "\(.module_name) \(.executed)"' 00:16:24.394 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:24.394 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:24.394 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:24.394 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:24.394 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79493 00:16:24.394 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79493 ']' 00:16:24.394 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79493 00:16:24.394 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:16:24.394 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:24.394 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79493 00:16:24.653 killing process with pid 79493 00:16:24.653 Received shutdown signal, test time was about 2.000000 seconds 00:16:24.653 00:16:24.653 Latency(us) 00:16:24.653 [2024-11-15T12:51:33.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:16:24.653 [2024-11-15T12:51:33.323Z] =================================================================================================================== 00:16:24.653 [2024-11-15T12:51:33.323Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:24.653 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:24.653 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:24.653 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79493' 00:16:24.653 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79493 00:16:24.653 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79493 00:16:24.653 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79320 00:16:24.653 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79320 ']' 00:16:24.653 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79320 00:16:24.653 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:16:24.653 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:24.653 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79320 00:16:24.653 killing process with pid 79320 00:16:24.653 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:24.653 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:24.653 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79320' 00:16:24.653 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79320 00:16:24.653 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79320 00:16:24.912 00:16:24.912 real 0m15.219s 00:16:24.912 user 0m29.895s 00:16:24.912 sys 0m4.222s 00:16:24.912 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:24.912 ************************************ 00:16:24.912 END TEST nvmf_digest_clean 00:16:24.912 ************************************ 00:16:24.912 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:24.912 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:16:24.912 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:24.912 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:24.912 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:16:24.912 ************************************ 00:16:24.912 START TEST nvmf_digest_error 00:16:24.912 ************************************ 00:16:24.912 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:16:24.912 12:51:33 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:16:24.912 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:24.912 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:24.912 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:24.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.912 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=79576 00:16:24.912 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 79576 00:16:24.912 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79576 ']' 00:16:24.912 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.912 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:24.912 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.912 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.912 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.912 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:24.912 [2024-11-15 12:51:33.466463] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:16:24.912 [2024-11-15 12:51:33.466555] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.171 [2024-11-15 12:51:33.613383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.171 [2024-11-15 12:51:33.640464] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.171 [2024-11-15 12:51:33.640517] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.171 [2024-11-15 12:51:33.640543] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:25.171 [2024-11-15 12:51:33.640549] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:25.171 [2024-11-15 12:51:33.640555] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
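The nvmf_digest_error test that begins here reuses the clean-test plumbing but routes CRC-32C through SPDK's error-injecting accel module so the host sees data digest failures. A minimal sketch of the RPC sequence, assuming the target answers on its default RPC socket and bdevperf (started without --wait-for-rpc this time) listens on /var/tmp/bperf.sock; every call below also appears in the trace that follows:

    # target side: route crc32c through the error-injection accel module
    # (possible because the target was started with --wait-for-rpc, before framework init)
    scripts/rpc.py accel_assign_opc -o crc32c -m error
    # host/bdevperf side: keep per-error statistics and retry failed I/O indefinitely
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # keep injection disabled while the controller attaches with data digest enabled
    scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # then corrupt 256 crc32c operations so reads complete with digest errors
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

The repeated 'data digest error on tqpair' messages and COMMAND TRANSIENT TRANSPORT ERROR completions that fill the rest of this section are the intended outcome of that corruption, with bdev_nvme retrying each failed read per the --bdev-retry-count -1 setting.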
00:16:25.171 [2024-11-15 12:51:33.640872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.171 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:25.171 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:16:25.171 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:25.171 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:25.172 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:25.172 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:25.172 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:16:25.172 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.172 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:25.172 [2024-11-15 12:51:33.737285] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:16:25.172 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.172 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:16:25.172 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:16:25.172 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.172 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:25.172 [2024-11-15 12:51:33.776589] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:25.172 null0 00:16:25.172 [2024-11-15 12:51:33.809592] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:25.172 [2024-11-15 12:51:33.833753] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:25.431 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.431 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:16:25.431 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:25.431 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:16:25.431 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:16:25.431 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:16:25.431 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79601 00:16:25.431 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79601 /var/tmp/bperf.sock 00:16:25.431 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:16:25.431 12:51:33 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79601 ']' 00:16:25.431 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:25.431 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:25.431 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:25.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:25.431 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:25.431 12:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:25.431 [2024-11-15 12:51:33.885426] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:16:25.431 [2024-11-15 12:51:33.885655] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79601 ] 00:16:25.431 [2024-11-15 12:51:34.027674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.431 [2024-11-15 12:51:34.058148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.431 [2024-11-15 12:51:34.086484] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:25.689 12:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:25.689 12:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:16:25.689 12:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:25.689 12:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:25.947 12:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:25.947 12:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.947 12:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:25.947 12:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.947 12:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:25.947 12:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:26.208 nvme0n1 00:16:26.208 12:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:26.208 12:51:34 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.208 12:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:26.208 12:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.208 12:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:26.208 12:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:26.466 Running I/O for 2 seconds... 00:16:26.466 [2024-11-15 12:51:34.940654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.466 [2024-11-15 12:51:34.940698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.466 [2024-11-15 12:51:34.940730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.466 [2024-11-15 12:51:34.954912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.466 [2024-11-15 12:51:34.954947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.466 [2024-11-15 12:51:34.954977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.466 [2024-11-15 12:51:34.968738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.466 [2024-11-15 12:51:34.968773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.466 [2024-11-15 12:51:34.968802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.466 [2024-11-15 12:51:34.982658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.466 [2024-11-15 12:51:34.982690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.466 [2024-11-15 12:51:34.982719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.466 [2024-11-15 12:51:34.996544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.466 [2024-11-15 12:51:34.996577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.466 [2024-11-15 12:51:34.996605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.466 [2024-11-15 12:51:35.010523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.466 [2024-11-15 12:51:35.010557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12305 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.466 [2024-11-15 12:51:35.010585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.466 [2024-11-15 12:51:35.024388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.466 [2024-11-15 12:51:35.024421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.466 [2024-11-15 12:51:35.024449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.466 [2024-11-15 12:51:35.038423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.466 [2024-11-15 12:51:35.038456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.466 [2024-11-15 12:51:35.038484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.466 [2024-11-15 12:51:35.052331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.466 [2024-11-15 12:51:35.052364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.466 [2024-11-15 12:51:35.052392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.466 [2024-11-15 12:51:35.066343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.466 [2024-11-15 12:51:35.066557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.466 [2024-11-15 12:51:35.066574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.466 [2024-11-15 12:51:35.080502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.466 [2024-11-15 12:51:35.080740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.466 [2024-11-15 12:51:35.080879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.466 [2024-11-15 12:51:35.095152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.466 [2024-11-15 12:51:35.095369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.466 [2024-11-15 12:51:35.095501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.466 [2024-11-15 12:51:35.109625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.466 [2024-11-15 12:51:35.109872] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.466 [2024-11-15 12:51:35.110009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.466 [2024-11-15 12:51:35.124380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.466 [2024-11-15 12:51:35.124579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.466 [2024-11-15 12:51:35.124752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.726 [2024-11-15 12:51:35.140572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.726 [2024-11-15 12:51:35.140819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.726 [2024-11-15 12:51:35.140998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.726 [2024-11-15 12:51:35.155491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.726 [2024-11-15 12:51:35.155739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.726 [2024-11-15 12:51:35.155919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.726 [2024-11-15 12:51:35.170143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.726 [2024-11-15 12:51:35.170358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.726 [2024-11-15 12:51:35.170480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.726 [2024-11-15 12:51:35.185258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.726 [2024-11-15 12:51:35.185463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.726 [2024-11-15 12:51:35.185586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.726 [2024-11-15 12:51:35.202039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.726 [2024-11-15 12:51:35.202260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.726 [2024-11-15 12:51:35.202278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.726 [2024-11-15 12:51:35.218757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.726 [2024-11-15 12:51:35.218792] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.726 [2024-11-15 12:51:35.218836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.726 [2024-11-15 12:51:35.234215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.726 [2024-11-15 12:51:35.234250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.726 [2024-11-15 12:51:35.234278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.726 [2024-11-15 12:51:35.249176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.726 [2024-11-15 12:51:35.249212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.726 [2024-11-15 12:51:35.249240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.726 [2024-11-15 12:51:35.264304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.726 [2024-11-15 12:51:35.264339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.726 [2024-11-15 12:51:35.264367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.726 [2024-11-15 12:51:35.279480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.726 [2024-11-15 12:51:35.279514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.726 [2024-11-15 12:51:35.279542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.726 [2024-11-15 12:51:35.294858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.726 [2024-11-15 12:51:35.294893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.726 [2024-11-15 12:51:35.294921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.726 [2024-11-15 12:51:35.309824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.726 [2024-11-15 12:51:35.309875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.726 [2024-11-15 12:51:35.309904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.726 [2024-11-15 12:51:35.324815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ebc370) 00:16:26.726 [2024-11-15 12:51:35.324863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.726 [2024-11-15 12:51:35.324891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.726 [2024-11-15 12:51:35.339795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.726 [2024-11-15 12:51:35.339845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.726 [2024-11-15 12:51:35.339873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.726 [2024-11-15 12:51:35.354897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.726 [2024-11-15 12:51:35.354946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.726 [2024-11-15 12:51:35.354973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.726 [2024-11-15 12:51:35.369333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.726 [2024-11-15 12:51:35.369380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.726 [2024-11-15 12:51:35.369408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.726 [2024-11-15 12:51:35.383433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.726 [2024-11-15 12:51:35.383467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.726 [2024-11-15 12:51:35.383494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.985 [2024-11-15 12:51:35.399257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.985 [2024-11-15 12:51:35.399306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.985 [2024-11-15 12:51:35.399334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.985 [2024-11-15 12:51:35.413560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.985 [2024-11-15 12:51:35.413631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.985 [2024-11-15 12:51:35.413644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.985 [2024-11-15 12:51:35.428150] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.985 [2024-11-15 12:51:35.428201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.985 [2024-11-15 12:51:35.428228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.985 [2024-11-15 12:51:35.442287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.985 [2024-11-15 12:51:35.442334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.985 [2024-11-15 12:51:35.442361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.985 [2024-11-15 12:51:35.456348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.985 [2024-11-15 12:51:35.456397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.985 [2024-11-15 12:51:35.456424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.985 [2024-11-15 12:51:35.470500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.985 [2024-11-15 12:51:35.470548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.985 [2024-11-15 12:51:35.470575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.985 [2024-11-15 12:51:35.484612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.985 [2024-11-15 12:51:35.484667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.985 [2024-11-15 12:51:35.484694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.985 [2024-11-15 12:51:35.498615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.985 [2024-11-15 12:51:35.498671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.985 [2024-11-15 12:51:35.498699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.985 [2024-11-15 12:51:35.512725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.985 [2024-11-15 12:51:35.512772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.985 [2024-11-15 12:51:35.512800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:16:26.985 [2024-11-15 12:51:35.526756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.985 [2024-11-15 12:51:35.526803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.985 [2024-11-15 12:51:35.526830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.985 [2024-11-15 12:51:35.540832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.985 [2024-11-15 12:51:35.540880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.985 [2024-11-15 12:51:35.540907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.985 [2024-11-15 12:51:35.555023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.985 [2024-11-15 12:51:35.555070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.985 [2024-11-15 12:51:35.555097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.985 [2024-11-15 12:51:35.569103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.985 [2024-11-15 12:51:35.569151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.985 [2024-11-15 12:51:35.569179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.985 [2024-11-15 12:51:35.583497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.985 [2024-11-15 12:51:35.583530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.985 [2024-11-15 12:51:35.583557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.985 [2024-11-15 12:51:35.597543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.985 [2024-11-15 12:51:35.597591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.985 [2024-11-15 12:51:35.597628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.985 [2024-11-15 12:51:35.611801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.985 [2024-11-15 12:51:35.611852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.985 [2024-11-15 12:51:35.611879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.985 [2024-11-15 12:51:35.625927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.985 [2024-11-15 12:51:35.625978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.985 [2024-11-15 12:51:35.626007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.985 [2024-11-15 12:51:35.640156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:26.985 [2024-11-15 12:51:35.640203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.985 [2024-11-15 12:51:35.640230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.245 [2024-11-15 12:51:35.655191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.245 [2024-11-15 12:51:35.655240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.245 [2024-11-15 12:51:35.655268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.245 [2024-11-15 12:51:35.669959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.245 [2024-11-15 12:51:35.670028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.245 [2024-11-15 12:51:35.670056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.245 [2024-11-15 12:51:35.684199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.245 [2024-11-15 12:51:35.684246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.245 [2024-11-15 12:51:35.684273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.245 [2024-11-15 12:51:35.698347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.245 [2024-11-15 12:51:35.698395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.245 [2024-11-15 12:51:35.698422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.245 [2024-11-15 12:51:35.713737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.245 [2024-11-15 12:51:35.713791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.245 [2024-11-15 12:51:35.713821] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.245 [2024-11-15 12:51:35.730543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.245 [2024-11-15 12:51:35.730591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.245 [2024-11-15 12:51:35.730645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.245 [2024-11-15 12:51:35.746558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.245 [2024-11-15 12:51:35.746630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.245 [2024-11-15 12:51:35.746659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.245 [2024-11-15 12:51:35.760724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.245 [2024-11-15 12:51:35.760772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.245 [2024-11-15 12:51:35.760799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.245 [2024-11-15 12:51:35.774987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.245 [2024-11-15 12:51:35.775035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.245 [2024-11-15 12:51:35.775062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.245 [2024-11-15 12:51:35.788890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.245 [2024-11-15 12:51:35.788939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.245 [2024-11-15 12:51:35.788966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.245 [2024-11-15 12:51:35.802772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.245 [2024-11-15 12:51:35.802819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.245 [2024-11-15 12:51:35.802846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.245 [2024-11-15 12:51:35.816631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.245 [2024-11-15 12:51:35.816680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3719 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:27.245 [2024-11-15 12:51:35.816707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.245 [2024-11-15 12:51:35.830533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.245 [2024-11-15 12:51:35.830583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.245 [2024-11-15 12:51:35.830610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.245 [2024-11-15 12:51:35.844320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.245 [2024-11-15 12:51:35.844368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.245 [2024-11-15 12:51:35.844395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.245 [2024-11-15 12:51:35.864290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.245 [2024-11-15 12:51:35.864338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.245 [2024-11-15 12:51:35.864366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.245 [2024-11-15 12:51:35.878488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.245 [2024-11-15 12:51:35.878536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.245 [2024-11-15 12:51:35.878563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.245 [2024-11-15 12:51:35.892448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.245 [2024-11-15 12:51:35.892496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.245 [2024-11-15 12:51:35.892523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.245 [2024-11-15 12:51:35.906396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.245 [2024-11-15 12:51:35.906444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.245 [2024-11-15 12:51:35.906471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.505 17332.00 IOPS, 67.70 MiB/s [2024-11-15T12:51:36.175Z] [2024-11-15 12:51:35.922196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.505 [2024-11-15 12:51:35.922244] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.505 [2024-11-15 12:51:35.922271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.505 [2024-11-15 12:51:35.936452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.505 [2024-11-15 12:51:35.936501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.505 [2024-11-15 12:51:35.936528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.505 [2024-11-15 12:51:35.950477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.505 [2024-11-15 12:51:35.950525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.505 [2024-11-15 12:51:35.950552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.505 [2024-11-15 12:51:35.965175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.505 [2024-11-15 12:51:35.965224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.505 [2024-11-15 12:51:35.965252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.505 [2024-11-15 12:51:35.979457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.505 [2024-11-15 12:51:35.979505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.505 [2024-11-15 12:51:35.979533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.505 [2024-11-15 12:51:35.993391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.505 [2024-11-15 12:51:35.993439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.505 [2024-11-15 12:51:35.993466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.505 [2024-11-15 12:51:36.007680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.505 [2024-11-15 12:51:36.007728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.505 [2024-11-15 12:51:36.007755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.505 [2024-11-15 12:51:36.021658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ebc370) 00:16:27.505 [2024-11-15 12:51:36.021744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.505 [2024-11-15 12:51:36.021773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.505 [2024-11-15 12:51:36.035499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.505 [2024-11-15 12:51:36.035548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.505 [2024-11-15 12:51:36.035575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.505 [2024-11-15 12:51:36.049295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.505 [2024-11-15 12:51:36.049344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.505 [2024-11-15 12:51:36.049371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.505 [2024-11-15 12:51:36.063265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.505 [2024-11-15 12:51:36.063313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.505 [2024-11-15 12:51:36.063340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.505 [2024-11-15 12:51:36.077076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.505 [2024-11-15 12:51:36.077124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.505 [2024-11-15 12:51:36.077152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.505 [2024-11-15 12:51:36.091310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.505 [2024-11-15 12:51:36.091357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.505 [2024-11-15 12:51:36.091385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.505 [2024-11-15 12:51:36.105212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.505 [2024-11-15 12:51:36.105261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.505 [2024-11-15 12:51:36.105288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.505 [2024-11-15 12:51:36.119307] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.505 [2024-11-15 12:51:36.119355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:34 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.505 [2024-11-15 12:51:36.119382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.505 [2024-11-15 12:51:36.133217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.505 [2024-11-15 12:51:36.133264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.505 [2024-11-15 12:51:36.133291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.505 [2024-11-15 12:51:36.147180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.506 [2024-11-15 12:51:36.147228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.506 [2024-11-15 12:51:36.147255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.506 [2024-11-15 12:51:36.161244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.506 [2024-11-15 12:51:36.161293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.506 [2024-11-15 12:51:36.161320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.765 [2024-11-15 12:51:36.176109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.765 [2024-11-15 12:51:36.176173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.765 [2024-11-15 12:51:36.176201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.765 [2024-11-15 12:51:36.190582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.765 [2024-11-15 12:51:36.190640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.765 [2024-11-15 12:51:36.190668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.765 [2024-11-15 12:51:36.204614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.765 [2024-11-15 12:51:36.204662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.765 [2024-11-15 12:51:36.204689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:16:27.765 [2024-11-15 12:51:36.218635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.765 [2024-11-15 12:51:36.218691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.765 [2024-11-15 12:51:36.218718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.765 [2024-11-15 12:51:36.232543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.765 [2024-11-15 12:51:36.232590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.765 [2024-11-15 12:51:36.232625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.765 [2024-11-15 12:51:36.246461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.765 [2024-11-15 12:51:36.246508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.765 [2024-11-15 12:51:36.246535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.765 [2024-11-15 12:51:36.260380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.765 [2024-11-15 12:51:36.260428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.765 [2024-11-15 12:51:36.260455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.765 [2024-11-15 12:51:36.274294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.765 [2024-11-15 12:51:36.274342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.765 [2024-11-15 12:51:36.274369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.765 [2024-11-15 12:51:36.288149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.765 [2024-11-15 12:51:36.288197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.765 [2024-11-15 12:51:36.288224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.765 [2024-11-15 12:51:36.302169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.765 [2024-11-15 12:51:36.302217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.766 [2024-11-15 12:51:36.302244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.766 [2024-11-15 12:51:36.316257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.766 [2024-11-15 12:51:36.316304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.766 [2024-11-15 12:51:36.316331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.766 [2024-11-15 12:51:36.330268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.766 [2024-11-15 12:51:36.330314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.766 [2024-11-15 12:51:36.330341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.766 [2024-11-15 12:51:36.344189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.766 [2024-11-15 12:51:36.344236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.766 [2024-11-15 12:51:36.344263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.766 [2024-11-15 12:51:36.359831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.766 [2024-11-15 12:51:36.359864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.766 [2024-11-15 12:51:36.359892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.766 [2024-11-15 12:51:36.376862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.766 [2024-11-15 12:51:36.376915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.766 [2024-11-15 12:51:36.376959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.766 [2024-11-15 12:51:36.393009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.766 [2024-11-15 12:51:36.393057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.766 [2024-11-15 12:51:36.393085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.766 [2024-11-15 12:51:36.408248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.766 [2024-11-15 12:51:36.408298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.766 [2024-11-15 12:51:36.408326] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.766 [2024-11-15 12:51:36.423459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:27.766 [2024-11-15 12:51:36.423508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.766 [2024-11-15 12:51:36.423536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.026 [2024-11-15 12:51:36.440088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.026 [2024-11-15 12:51:36.440136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.026 [2024-11-15 12:51:36.440164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.026 [2024-11-15 12:51:36.455285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.026 [2024-11-15 12:51:36.455334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.026 [2024-11-15 12:51:36.455361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.026 [2024-11-15 12:51:36.470468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.026 [2024-11-15 12:51:36.470515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.026 [2024-11-15 12:51:36.470543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.026 [2024-11-15 12:51:36.485909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.026 [2024-11-15 12:51:36.485945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.026 [2024-11-15 12:51:36.485989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.026 [2024-11-15 12:51:36.500259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.026 [2024-11-15 12:51:36.500292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.026 [2024-11-15 12:51:36.500319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.026 [2024-11-15 12:51:36.514408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.026 [2024-11-15 12:51:36.514441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:28.026 [2024-11-15 12:51:36.514469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.026 [2024-11-15 12:51:36.528555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.026 [2024-11-15 12:51:36.528587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.026 [2024-11-15 12:51:36.528626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.026 [2024-11-15 12:51:36.542819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.026 [2024-11-15 12:51:36.542851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.026 [2024-11-15 12:51:36.542879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.026 [2024-11-15 12:51:36.556680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.026 [2024-11-15 12:51:36.556711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.026 [2024-11-15 12:51:36.556739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.026 [2024-11-15 12:51:36.570925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.026 [2024-11-15 12:51:36.570957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.026 [2024-11-15 12:51:36.570984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.026 [2024-11-15 12:51:36.584777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.026 [2024-11-15 12:51:36.584825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.026 [2024-11-15 12:51:36.584853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.026 [2024-11-15 12:51:36.599377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.026 [2024-11-15 12:51:36.599410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.026 [2024-11-15 12:51:36.599437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.026 [2024-11-15 12:51:36.613552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.026 [2024-11-15 12:51:36.613586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8060 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.026 [2024-11-15 12:51:36.613629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.026 [2024-11-15 12:51:36.627594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.026 [2024-11-15 12:51:36.627633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.026 [2024-11-15 12:51:36.627663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.026 [2024-11-15 12:51:36.641653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.026 [2024-11-15 12:51:36.641708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.026 [2024-11-15 12:51:36.641736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.026 [2024-11-15 12:51:36.655576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.026 [2024-11-15 12:51:36.655634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.026 [2024-11-15 12:51:36.655663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.026 [2024-11-15 12:51:36.669582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.026 [2024-11-15 12:51:36.669658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.026 [2024-11-15 12:51:36.669709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.026 [2024-11-15 12:51:36.683510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.026 [2024-11-15 12:51:36.683543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.026 [2024-11-15 12:51:36.683571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.287 [2024-11-15 12:51:36.698931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.287 [2024-11-15 12:51:36.698965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.287 [2024-11-15 12:51:36.698993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.287 [2024-11-15 12:51:36.713039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.287 [2024-11-15 12:51:36.713072] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.287 [2024-11-15 12:51:36.713100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.287 [2024-11-15 12:51:36.727228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.287 [2024-11-15 12:51:36.727260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.287 [2024-11-15 12:51:36.727287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.287 [2024-11-15 12:51:36.743664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.287 [2024-11-15 12:51:36.743722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.287 [2024-11-15 12:51:36.743736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.287 [2024-11-15 12:51:36.760332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.287 [2024-11-15 12:51:36.760508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.287 [2024-11-15 12:51:36.760541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.287 [2024-11-15 12:51:36.776286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.287 [2024-11-15 12:51:36.776321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.287 [2024-11-15 12:51:36.776349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.287 [2024-11-15 12:51:36.796902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.287 [2024-11-15 12:51:36.796936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.287 [2024-11-15 12:51:36.796964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.287 [2024-11-15 12:51:36.811287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.287 [2024-11-15 12:51:36.811319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.287 [2024-11-15 12:51:36.811346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.287 [2024-11-15 12:51:36.825565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.287 [2024-11-15 12:51:36.825625] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.287 [2024-11-15 12:51:36.825655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.287 [2024-11-15 12:51:36.839668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.287 [2024-11-15 12:51:36.839702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.287 [2024-11-15 12:51:36.839729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.287 [2024-11-15 12:51:36.853782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.287 [2024-11-15 12:51:36.853990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.287 [2024-11-15 12:51:36.854022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.287 [2024-11-15 12:51:36.868232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.287 [2024-11-15 12:51:36.868429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.287 [2024-11-15 12:51:36.868447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.287 [2024-11-15 12:51:36.882769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.287 [2024-11-15 12:51:36.882968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.287 [2024-11-15 12:51:36.883106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.287 [2024-11-15 12:51:36.897224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.287 [2024-11-15 12:51:36.897439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.287 [2024-11-15 12:51:36.897561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.287 [2024-11-15 12:51:36.912162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebc370) 00:16:28.287 [2024-11-15 12:51:36.912377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.287 [2024-11-15 12:51:36.912509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.287 17394.50 IOPS, 67.95 MiB/s 00:16:28.287 Latency(us) 00:16:28.287 [2024-11-15T12:51:36.957Z] Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:16:28.287 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:28.287 nvme0n1 : 2.00 17419.12 68.04 0.00 0.00 7343.52 6642.97 27048.49 00:16:28.287 [2024-11-15T12:51:36.957Z] =================================================================================================================== 00:16:28.287 [2024-11-15T12:51:36.957Z] Total : 17419.12 68.04 0.00 0.00 7343.52 6642.97 27048.49 00:16:28.287 { 00:16:28.287 "results": [ 00:16:28.287 { 00:16:28.287 "job": "nvme0n1", 00:16:28.287 "core_mask": "0x2", 00:16:28.287 "workload": "randread", 00:16:28.287 "status": "finished", 00:16:28.287 "queue_depth": 128, 00:16:28.287 "io_size": 4096, 00:16:28.287 "runtime": 2.004521, 00:16:28.287 "iops": 17419.124070039674, 00:16:28.287 "mibps": 68.04345339859248, 00:16:28.287 "io_failed": 0, 00:16:28.287 "io_timeout": 0, 00:16:28.287 "avg_latency_us": 7343.515967996834, 00:16:28.287 "min_latency_us": 6642.967272727273, 00:16:28.287 "max_latency_us": 27048.494545454545 00:16:28.287 } 00:16:28.287 ], 00:16:28.287 "core_count": 1 00:16:28.287 } 00:16:28.287 12:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:28.287 12:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:28.287 12:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:28.287 | .driver_specific 00:16:28.287 | .nvme_error 00:16:28.287 | .status_code 00:16:28.287 | .command_transient_transport_error' 00:16:28.287 12:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:28.856 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 136 > 0 )) 00:16:28.856 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79601 00:16:28.856 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79601 ']' 00:16:28.856 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79601 00:16:28.856 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:16:28.856 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:28.856 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79601 00:16:28.856 killing process with pid 79601 00:16:28.856 Received shutdown signal, test time was about 2.000000 seconds 00:16:28.856 00:16:28.856 Latency(us) 00:16:28.856 [2024-11-15T12:51:37.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.856 [2024-11-15T12:51:37.526Z] =================================================================================================================== 00:16:28.856 [2024-11-15T12:51:37.526Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:28.856 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:28.856 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:28.856 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 79601' 00:16:28.856 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79601 00:16:28.856 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79601 00:16:28.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:28.856 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:16:28.856 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:28.856 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:16:28.856 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:16:28.856 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:16:28.856 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79648 00:16:28.856 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79648 /var/tmp/bperf.sock 00:16:28.856 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:16:28.856 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79648 ']' 00:16:28.856 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:28.856 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:28.856 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:28.856 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:28.856 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:28.856 [2024-11-15 12:51:37.440022] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:16:28.856 [2024-11-15 12:51:37.440782] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79648 ] 00:16:28.856 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:28.856 Zero copy mechanism will not be used. 
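The trace above closes out the first error pass: get_transient_errcount reads the per-error-code NVMe statistics back from bdevperf via bdev_get_iostat plus a jq filter, the (( 136 > 0 )) check confirms that the injected crc32c corruption actually surfaced as transient transport errors, and the 4096-byte bdevperf instance (pid 79601) is then killed. A minimal stand-alone sketch of that same query, assuming the bdevperf RPC socket from this run is still listening at /var/tmp/bperf.sock (only the errcount variable and the echo message are illustrative, the RPC and jq path are taken verbatim from the trace):

    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # A non-zero count is expected once the accel crc32c corruption has been armed.
    (( errcount > 0 )) && echo "observed ${errcount} transient transport errors on nvme0n1"

run_bperf_err randread 131072 16 then launches the next bdevperf instance with -w randread -o 131072 -t 2 -q 16, and its startup and error output follow below.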
00:16:29.114 [2024-11-15 12:51:37.586602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.114 [2024-11-15 12:51:37.616061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.114 [2024-11-15 12:51:37.643985] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:29.114 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.114 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:16:29.114 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:29.114 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:29.372 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:29.372 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.372 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:29.372 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.372 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:29.372 12:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:29.631 nvme0n1 00:16:29.631 12:51:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:29.631 12:51:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.631 12:51:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:29.631 12:51:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.631 12:51:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:29.631 12:51:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:29.891 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:29.891 Zero copy mechanism will not be used. 00:16:29.891 Running I/O for 2 seconds... 
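The setup traced above is what produces the digest errors printed below: bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 enables per-error-code NVMe statistics in the new bdevperf instance, accel_error_inject_error -o crc32c -t disable clears any previous injection, bdev_nvme_attach_controller --ddgst attaches nvme0 over TCP with data digest enabled, and accel_error_inject_error -o crc32c -t corrupt -i 32 arms the corruption before bdevperf.py perform_tests starts the two-second run. With the crc32c results corrupted, the data digests no longer match the payload, so the initiator logs "data digest error" and each affected read completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22). A sketch of the same RPC sequence, reconstructed from this trace rather than copied from host/digest.sh, with rpc.py standing in for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path:

    # bdevperf side: enable per-error-code NVMe error statistics (options as traced above)
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # plain rpc_cmd in the trace (default RPC socket): clear any stale crc32c injection
    rpc.py accel_error_inject_error -o crc32c -t disable
    # attach the controller with data digest (--ddgst) enabled on the TCP transport
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # arm the crc32c corruption, then kick off the timed workload
    rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The data digest errors that follow, now on tqpair (0x2497400) and with len:32 (131072-byte) reads, are the expected outcome of this second pass.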
00:16:29.891 [2024-11-15 12:51:38.384249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.891 [2024-11-15 12:51:38.384310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.891 [2024-11-15 12:51:38.384324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:29.891 [2024-11-15 12:51:38.388306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.891 [2024-11-15 12:51:38.388342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.891 [2024-11-15 12:51:38.388371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:29.891 [2024-11-15 12:51:38.392430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.891 [2024-11-15 12:51:38.392465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.891 [2024-11-15 12:51:38.392493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:29.892 [2024-11-15 12:51:38.396452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.892 [2024-11-15 12:51:38.396487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.892 [2024-11-15 12:51:38.396516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:29.892 [2024-11-15 12:51:38.400412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.892 [2024-11-15 12:51:38.400446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.892 [2024-11-15 12:51:38.400475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:29.892 [2024-11-15 12:51:38.404403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.892 [2024-11-15 12:51:38.404438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.892 [2024-11-15 12:51:38.404466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:29.892 [2024-11-15 12:51:38.408421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.892 [2024-11-15 12:51:38.408456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.892 [2024-11-15 12:51:38.408484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:29.892 [2024-11-15 12:51:38.412380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.892 [2024-11-15 12:51:38.412414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.892 [2024-11-15 12:51:38.412442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:29.892 [2024-11-15 12:51:38.416448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.892 [2024-11-15 12:51:38.416482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.892 [2024-11-15 12:51:38.416511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:29.892 [2024-11-15 12:51:38.420429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.892 [2024-11-15 12:51:38.420462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.892 [2024-11-15 12:51:38.420490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:29.892 [2024-11-15 12:51:38.424621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.892 [2024-11-15 12:51:38.424700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.892 [2024-11-15 12:51:38.424714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:29.892 [2024-11-15 12:51:38.428547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.892 [2024-11-15 12:51:38.428581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.892 [2024-11-15 12:51:38.428609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:29.892 [2024-11-15 12:51:38.432485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.892 [2024-11-15 12:51:38.432519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.892 [2024-11-15 12:51:38.432547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:29.892 [2024-11-15 12:51:38.436475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.892 [2024-11-15 12:51:38.436508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.892 [2024-11-15 12:51:38.436537] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:29.892 [2024-11-15 12:51:38.440613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.892 [2024-11-15 12:51:38.440677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.892 [2024-11-15 12:51:38.440706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:29.892 [2024-11-15 12:51:38.444800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.892 [2024-11-15 12:51:38.444835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.892 [2024-11-15 12:51:38.444847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:29.892 [2024-11-15 12:51:38.448867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.892 [2024-11-15 12:51:38.448902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.892 [2024-11-15 12:51:38.448914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:29.892 [2024-11-15 12:51:38.452673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.892 [2024-11-15 12:51:38.452705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.892 [2024-11-15 12:51:38.452734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:29.892 [2024-11-15 12:51:38.456546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.892 [2024-11-15 12:51:38.456581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.892 [2024-11-15 12:51:38.456608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:29.892 [2024-11-15 12:51:38.460455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.892 [2024-11-15 12:51:38.460489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.892 [2024-11-15 12:51:38.460517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:29.892 [2024-11-15 12:51:38.464407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.892 [2024-11-15 12:51:38.464456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.892 [2024-11-15 12:51:38.464484] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:29.892 [2024-11-15 12:51:38.468363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.892 [2024-11-15 12:51:38.468397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.892 [2024-11-15 12:51:38.468426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:29.892 [2024-11-15 12:51:38.472285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.892 [2024-11-15 12:51:38.472319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.892 [2024-11-15 12:51:38.472347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:29.892 [2024-11-15 12:51:38.476215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.892 [2024-11-15 12:51:38.476249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.892 [2024-11-15 12:51:38.476277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:29.892 [2024-11-15 12:51:38.480116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.892 [2024-11-15 12:51:38.480150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.892 [2024-11-15 12:51:38.480178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:29.892 [2024-11-15 12:51:38.484083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.892 [2024-11-15 12:51:38.484116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.892 [2024-11-15 12:51:38.484145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:29.892 [2024-11-15 12:51:38.488043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.892 [2024-11-15 12:51:38.488077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.892 [2024-11-15 12:51:38.488105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:29.892 [2024-11-15 12:51:38.492028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.892 [2024-11-15 12:51:38.492062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:29.892 [2024-11-15 12:51:38.492090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:29.892 [2024-11-15 12:51:38.495917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.893 [2024-11-15 12:51:38.495952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.893 [2024-11-15 12:51:38.495980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:29.893 [2024-11-15 12:51:38.499772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.893 [2024-11-15 12:51:38.499805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.893 [2024-11-15 12:51:38.499833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:29.893 [2024-11-15 12:51:38.503735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.893 [2024-11-15 12:51:38.503769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.893 [2024-11-15 12:51:38.503797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:29.893 [2024-11-15 12:51:38.507634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.893 [2024-11-15 12:51:38.507668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.893 [2024-11-15 12:51:38.507697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:29.893 [2024-11-15 12:51:38.511480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.893 [2024-11-15 12:51:38.511672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.893 [2024-11-15 12:51:38.511705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:29.893 [2024-11-15 12:51:38.515716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.893 [2024-11-15 12:51:38.515750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.893 [2024-11-15 12:51:38.515779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:29.893 [2024-11-15 12:51:38.519593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.893 [2024-11-15 12:51:38.519651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.893 [2024-11-15 12:51:38.519679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:29.893 [2024-11-15 12:51:38.523472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.893 [2024-11-15 12:51:38.523697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.893 [2024-11-15 12:51:38.523730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:29.893 [2024-11-15 12:51:38.527464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.893 [2024-11-15 12:51:38.527494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.893 [2024-11-15 12:51:38.527522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:29.893 [2024-11-15 12:51:38.531501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.893 [2024-11-15 12:51:38.531748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.893 [2024-11-15 12:51:38.531945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:29.893 [2024-11-15 12:51:38.535898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.893 [2024-11-15 12:51:38.536085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.893 [2024-11-15 12:51:38.536221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:29.893 [2024-11-15 12:51:38.540239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.893 [2024-11-15 12:51:38.540427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.893 [2024-11-15 12:51:38.540583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:29.893 [2024-11-15 12:51:38.544605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.893 [2024-11-15 12:51:38.544855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.893 [2024-11-15 12:51:38.544977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:29.893 [2024-11-15 12:51:38.549018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.893 [2024-11-15 12:51:38.549233] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.893 [2024-11-15 12:51:38.549364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:29.893 [2024-11-15 12:51:38.553391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.893 [2024-11-15 12:51:38.553567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.893 [2024-11-15 12:51:38.553814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:29.893 [2024-11-15 12:51:38.558408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:29.893 [2024-11-15 12:51:38.558628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.893 [2024-11-15 12:51:38.558869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.152 [2024-11-15 12:51:38.563239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.152 [2024-11-15 12:51:38.563430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.152 [2024-11-15 12:51:38.563564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.152 [2024-11-15 12:51:38.568119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.152 [2024-11-15 12:51:38.568314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.152 [2024-11-15 12:51:38.568443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.152 [2024-11-15 12:51:38.572514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.152 [2024-11-15 12:51:38.572728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.153 [2024-11-15 12:51:38.572745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.153 [2024-11-15 12:51:38.576673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.153 [2024-11-15 12:51:38.576706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.153 [2024-11-15 12:51:38.576735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.153 [2024-11-15 12:51:38.580584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 
00:16:30.153 [2024-11-15 12:51:38.580643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.153 [2024-11-15 12:51:38.580657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.153 [2024-11-15 12:51:38.584439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.153 [2024-11-15 12:51:38.584473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.153 [2024-11-15 12:51:38.584501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.153 [2024-11-15 12:51:38.588490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.153 [2024-11-15 12:51:38.588525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.153 [2024-11-15 12:51:38.588554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.153 [2024-11-15 12:51:38.592463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.153 [2024-11-15 12:51:38.592496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.153 [2024-11-15 12:51:38.592524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.153 [2024-11-15 12:51:38.596423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.153 [2024-11-15 12:51:38.596456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.153 [2024-11-15 12:51:38.596485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.153 [2024-11-15 12:51:38.600386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.153 [2024-11-15 12:51:38.600419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.153 [2024-11-15 12:51:38.600447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.153 [2024-11-15 12:51:38.604361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.153 [2024-11-15 12:51:38.604395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.153 [2024-11-15 12:51:38.604423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.153 [2024-11-15 12:51:38.608395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.153 [2024-11-15 12:51:38.608429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.153 [2024-11-15 12:51:38.608457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.153 [2024-11-15 12:51:38.612369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.153 [2024-11-15 12:51:38.612402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.153 [2024-11-15 12:51:38.612430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.153 [2024-11-15 12:51:38.616436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.153 [2024-11-15 12:51:38.616470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.153 [2024-11-15 12:51:38.616499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.153 [2024-11-15 12:51:38.620404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.153 [2024-11-15 12:51:38.620438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.153 [2024-11-15 12:51:38.620466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.153 [2024-11-15 12:51:38.624448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.153 [2024-11-15 12:51:38.624483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.153 [2024-11-15 12:51:38.624512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.153 [2024-11-15 12:51:38.628420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.153 [2024-11-15 12:51:38.628454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.153 [2024-11-15 12:51:38.628482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.153 [2024-11-15 12:51:38.632399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.153 [2024-11-15 12:51:38.632433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.153 [2024-11-15 12:51:38.632461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.153 [2024-11-15 12:51:38.636526] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.153 [2024-11-15 12:51:38.636559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.153 [2024-11-15 12:51:38.636588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.153 [2024-11-15 12:51:38.640474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.153 [2024-11-15 12:51:38.640507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.153 [2024-11-15 12:51:38.640535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.153 [2024-11-15 12:51:38.644419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.153 [2024-11-15 12:51:38.644452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.153 [2024-11-15 12:51:38.644481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.153 [2024-11-15 12:51:38.648408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.153 [2024-11-15 12:51:38.648442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.153 [2024-11-15 12:51:38.648471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.153 [2024-11-15 12:51:38.652315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.153 [2024-11-15 12:51:38.652349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.153 [2024-11-15 12:51:38.652377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.153 [2024-11-15 12:51:38.656362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.153 [2024-11-15 12:51:38.656395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.153 [2024-11-15 12:51:38.656423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.153 [2024-11-15 12:51:38.660310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.153 [2024-11-15 12:51:38.660344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.153 [2024-11-15 12:51:38.660373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:16:30.153 [2024-11-15 12:51:38.664279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.153 [2024-11-15 12:51:38.664312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.153 [2024-11-15 12:51:38.664340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.153 [2024-11-15 12:51:38.668754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.153 [2024-11-15 12:51:38.668790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.153 [2024-11-15 12:51:38.668820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.153 [2024-11-15 12:51:38.673096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.153 [2024-11-15 12:51:38.673131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.153 [2024-11-15 12:51:38.673160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.153 [2024-11-15 12:51:38.677344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.153 [2024-11-15 12:51:38.677379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.153 [2024-11-15 12:51:38.677407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.153 [2024-11-15 12:51:38.681854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.154 [2024-11-15 12:51:38.681894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.154 [2024-11-15 12:51:38.681909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.154 [2024-11-15 12:51:38.686327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.154 [2024-11-15 12:51:38.686362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.154 [2024-11-15 12:51:38.686391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.154 [2024-11-15 12:51:38.690869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.154 [2024-11-15 12:51:38.690906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.154 [2024-11-15 12:51:38.690936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.154 [2024-11-15 12:51:38.695201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.154 [2024-11-15 12:51:38.695235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.154 [2024-11-15 12:51:38.695263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.154 [2024-11-15 12:51:38.699664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.154 [2024-11-15 12:51:38.699713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.154 [2024-11-15 12:51:38.699743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.154 [2024-11-15 12:51:38.704068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.154 [2024-11-15 12:51:38.704103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.154 [2024-11-15 12:51:38.704131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.154 [2024-11-15 12:51:38.708348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.154 [2024-11-15 12:51:38.708383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.154 [2024-11-15 12:51:38.708412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.154 [2024-11-15 12:51:38.712578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.154 [2024-11-15 12:51:38.712672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.154 [2024-11-15 12:51:38.712686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.154 [2024-11-15 12:51:38.716879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.154 [2024-11-15 12:51:38.716916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.154 [2024-11-15 12:51:38.716930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.154 [2024-11-15 12:51:38.721007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.154 [2024-11-15 12:51:38.721042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.154 [2024-11-15 12:51:38.721070] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.154 [2024-11-15 12:51:38.725312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.154 [2024-11-15 12:51:38.725347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.154 [2024-11-15 12:51:38.725376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.154 [2024-11-15 12:51:38.729354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.154 [2024-11-15 12:51:38.729389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.154 [2024-11-15 12:51:38.729417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.154 [2024-11-15 12:51:38.733378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.154 [2024-11-15 12:51:38.733413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.154 [2024-11-15 12:51:38.733441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.154 [2024-11-15 12:51:38.737507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.154 [2024-11-15 12:51:38.737542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.154 [2024-11-15 12:51:38.737570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.154 [2024-11-15 12:51:38.741746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.154 [2024-11-15 12:51:38.741783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.154 [2024-11-15 12:51:38.741812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.154 [2024-11-15 12:51:38.745754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.154 [2024-11-15 12:51:38.745791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.154 [2024-11-15 12:51:38.745820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.154 [2024-11-15 12:51:38.749732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.154 [2024-11-15 12:51:38.749768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.154 
[2024-11-15 12:51:38.749798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.154 [2024-11-15 12:51:38.753933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.154 [2024-11-15 12:51:38.753971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.154 [2024-11-15 12:51:38.753985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.154 [2024-11-15 12:51:38.758038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.154 [2024-11-15 12:51:38.758089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.154 [2024-11-15 12:51:38.758117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.154 [2024-11-15 12:51:38.762016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.154 [2024-11-15 12:51:38.762081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.154 [2024-11-15 12:51:38.762110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.154 [2024-11-15 12:51:38.766119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.154 [2024-11-15 12:51:38.766154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.154 [2024-11-15 12:51:38.766181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.154 [2024-11-15 12:51:38.770265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.154 [2024-11-15 12:51:38.770299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.154 [2024-11-15 12:51:38.770328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.154 [2024-11-15 12:51:38.774346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.154 [2024-11-15 12:51:38.774381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.154 [2024-11-15 12:51:38.774409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.154 [2024-11-15 12:51:38.778574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.154 [2024-11-15 12:51:38.778637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21728 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.154 [2024-11-15 12:51:38.778653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.154 [2024-11-15 12:51:38.783134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.154 [2024-11-15 12:51:38.783185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.154 [2024-11-15 12:51:38.783214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.154 [2024-11-15 12:51:38.787579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.154 [2024-11-15 12:51:38.787673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.154 [2024-11-15 12:51:38.787708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.154 [2024-11-15 12:51:38.792074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.154 [2024-11-15 12:51:38.792126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.154 [2024-11-15 12:51:38.792154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.154 [2024-11-15 12:51:38.796797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.155 [2024-11-15 12:51:38.796836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.155 [2024-11-15 12:51:38.796866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.155 [2024-11-15 12:51:38.801458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.155 [2024-11-15 12:51:38.801508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.155 [2024-11-15 12:51:38.801536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.155 [2024-11-15 12:51:38.805856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.155 [2024-11-15 12:51:38.805911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.155 [2024-11-15 12:51:38.805924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.155 [2024-11-15 12:51:38.810332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.155 [2024-11-15 12:51:38.810382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.155 [2024-11-15 12:51:38.810410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.155 [2024-11-15 12:51:38.814855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.155 [2024-11-15 12:51:38.814924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.155 [2024-11-15 12:51:38.814954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.155 [2024-11-15 12:51:38.819582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.155 [2024-11-15 12:51:38.819660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.155 [2024-11-15 12:51:38.819704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.415 [2024-11-15 12:51:38.824198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.415 [2024-11-15 12:51:38.824247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.415 [2024-11-15 12:51:38.824275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.415 [2024-11-15 12:51:38.828726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.415 [2024-11-15 12:51:38.828787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.415 [2024-11-15 12:51:38.828816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.415 [2024-11-15 12:51:38.832798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.416 [2024-11-15 12:51:38.832847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.416 [2024-11-15 12:51:38.832875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.416 [2024-11-15 12:51:38.836834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.416 [2024-11-15 12:51:38.836872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.416 [2024-11-15 12:51:38.836902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.416 [2024-11-15 12:51:38.840953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.416 [2024-11-15 12:51:38.841020] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.416 [2024-11-15 12:51:38.841047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.416 [2024-11-15 12:51:38.844948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.416 [2024-11-15 12:51:38.845013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.416 [2024-11-15 12:51:38.845040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.416 [2024-11-15 12:51:38.848907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.416 [2024-11-15 12:51:38.848956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.416 [2024-11-15 12:51:38.848983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.416 [2024-11-15 12:51:38.852820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.416 [2024-11-15 12:51:38.852867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.416 [2024-11-15 12:51:38.852895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.416 [2024-11-15 12:51:38.856771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.416 [2024-11-15 12:51:38.856820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.416 [2024-11-15 12:51:38.856847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.416 [2024-11-15 12:51:38.860660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.416 [2024-11-15 12:51:38.860708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.416 [2024-11-15 12:51:38.860735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.416 [2024-11-15 12:51:38.864522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.416 [2024-11-15 12:51:38.864571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.416 [2024-11-15 12:51:38.864598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.416 [2024-11-15 12:51:38.868447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2497400) 00:16:30.416 [2024-11-15 12:51:38.868495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.416 [2024-11-15 12:51:38.868523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.416 [2024-11-15 12:51:38.872578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.416 [2024-11-15 12:51:38.872640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.416 [2024-11-15 12:51:38.872668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.416 [2024-11-15 12:51:38.876487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.416 [2024-11-15 12:51:38.876535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.416 [2024-11-15 12:51:38.876563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.416 [2024-11-15 12:51:38.880503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.416 [2024-11-15 12:51:38.880551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.416 [2024-11-15 12:51:38.880579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.416 [2024-11-15 12:51:38.884356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.416 [2024-11-15 12:51:38.884404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.416 [2024-11-15 12:51:38.884431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.416 [2024-11-15 12:51:38.888232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.416 [2024-11-15 12:51:38.888280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.416 [2024-11-15 12:51:38.888308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.416 [2024-11-15 12:51:38.892166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.416 [2024-11-15 12:51:38.892214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.416 [2024-11-15 12:51:38.892242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.416 [2024-11-15 12:51:38.895983] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.416 [2024-11-15 12:51:38.896048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.416 [2024-11-15 12:51:38.896075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.416 [2024-11-15 12:51:38.899858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.416 [2024-11-15 12:51:38.899907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.416 [2024-11-15 12:51:38.899935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.416 [2024-11-15 12:51:38.903716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.416 [2024-11-15 12:51:38.903764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.416 [2024-11-15 12:51:38.903792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.416 [2024-11-15 12:51:38.907654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.416 [2024-11-15 12:51:38.907702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.416 [2024-11-15 12:51:38.907730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.416 [2024-11-15 12:51:38.911461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.416 [2024-11-15 12:51:38.911509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.416 [2024-11-15 12:51:38.911537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.416 [2024-11-15 12:51:38.915422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.416 [2024-11-15 12:51:38.915471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.416 [2024-11-15 12:51:38.915513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.416 [2024-11-15 12:51:38.919494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.416 [2024-11-15 12:51:38.919543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.416 [2024-11-15 12:51:38.919570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:16:30.416 [2024-11-15 12:51:38.923524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.416 [2024-11-15 12:51:38.923574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.416 [2024-11-15 12:51:38.923601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.416 [2024-11-15 12:51:38.927424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.416 [2024-11-15 12:51:38.927473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.416 [2024-11-15 12:51:38.927500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.416 [2024-11-15 12:51:38.931297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.416 [2024-11-15 12:51:38.931346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.416 [2024-11-15 12:51:38.931373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.416 [2024-11-15 12:51:38.935915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.416 [2024-11-15 12:51:38.935985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.416 [2024-11-15 12:51:38.936033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.417 [2024-11-15 12:51:38.941355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.417 [2024-11-15 12:51:38.941409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.417 [2024-11-15 12:51:38.941422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.417 [2024-11-15 12:51:38.947013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.417 [2024-11-15 12:51:38.947081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.417 [2024-11-15 12:51:38.947094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.417 [2024-11-15 12:51:38.951928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.417 [2024-11-15 12:51:38.951996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.417 [2024-11-15 12:51:38.952025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.417 [2024-11-15 12:51:38.956031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.417 [2024-11-15 12:51:38.956083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.417 [2024-11-15 12:51:38.956094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.417 [2024-11-15 12:51:38.959915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.417 [2024-11-15 12:51:38.959966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.417 [2024-11-15 12:51:38.959994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.417 [2024-11-15 12:51:38.963750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.417 [2024-11-15 12:51:38.963799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.417 [2024-11-15 12:51:38.963826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.417 [2024-11-15 12:51:38.967588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.417 [2024-11-15 12:51:38.967663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.417 [2024-11-15 12:51:38.967692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.417 [2024-11-15 12:51:38.971423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.417 [2024-11-15 12:51:38.971471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.417 [2024-11-15 12:51:38.971498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.417 [2024-11-15 12:51:38.975376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.417 [2024-11-15 12:51:38.975423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.417 [2024-11-15 12:51:38.975450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.417 [2024-11-15 12:51:38.979289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.417 [2024-11-15 12:51:38.979337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.417 [2024-11-15 12:51:38.979365] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.417 [2024-11-15 12:51:38.983235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.417 [2024-11-15 12:51:38.983284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.417 [2024-11-15 12:51:38.983311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.417 [2024-11-15 12:51:38.987192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.417 [2024-11-15 12:51:38.987241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.417 [2024-11-15 12:51:38.987268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.417 [2024-11-15 12:51:38.991160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.417 [2024-11-15 12:51:38.991208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.417 [2024-11-15 12:51:38.991236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.417 [2024-11-15 12:51:38.995052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.417 [2024-11-15 12:51:38.995100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.417 [2024-11-15 12:51:38.995128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.417 [2024-11-15 12:51:38.998936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.417 [2024-11-15 12:51:38.998986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.417 [2024-11-15 12:51:38.999014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.417 [2024-11-15 12:51:39.003015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.417 [2024-11-15 12:51:39.003065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.417 [2024-11-15 12:51:39.003092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.417 [2024-11-15 12:51:39.007077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.417 [2024-11-15 12:51:39.007127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.417 
[2024-11-15 12:51:39.007155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.417 [2024-11-15 12:51:39.011031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.417 [2024-11-15 12:51:39.011080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.417 [2024-11-15 12:51:39.011107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.417 [2024-11-15 12:51:39.014985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.417 [2024-11-15 12:51:39.015034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.417 [2024-11-15 12:51:39.015062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.417 [2024-11-15 12:51:39.018987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.417 [2024-11-15 12:51:39.019035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.417 [2024-11-15 12:51:39.019063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.417 [2024-11-15 12:51:39.022831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.417 [2024-11-15 12:51:39.022880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.417 [2024-11-15 12:51:39.022907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.417 [2024-11-15 12:51:39.026720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.417 [2024-11-15 12:51:39.026767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.417 [2024-11-15 12:51:39.026794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.417 [2024-11-15 12:51:39.030694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.417 [2024-11-15 12:51:39.030741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.417 [2024-11-15 12:51:39.030768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.417 [2024-11-15 12:51:39.034531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.417 [2024-11-15 12:51:39.034579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19936 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.417 [2024-11-15 12:51:39.034606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.417 [2024-11-15 12:51:39.038498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.417 [2024-11-15 12:51:39.038549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.417 [2024-11-15 12:51:39.038576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.417 [2024-11-15 12:51:39.042494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.417 [2024-11-15 12:51:39.042543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.417 [2024-11-15 12:51:39.042570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.417 [2024-11-15 12:51:39.046457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.418 [2024-11-15 12:51:39.046505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.418 [2024-11-15 12:51:39.046533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.418 [2024-11-15 12:51:39.050340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.418 [2024-11-15 12:51:39.050388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.418 [2024-11-15 12:51:39.050415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.418 [2024-11-15 12:51:39.054349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.418 [2024-11-15 12:51:39.054397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.418 [2024-11-15 12:51:39.054424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.418 [2024-11-15 12:51:39.058352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.418 [2024-11-15 12:51:39.058401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.418 [2024-11-15 12:51:39.058429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.418 [2024-11-15 12:51:39.062294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.418 [2024-11-15 12:51:39.062343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.418 [2024-11-15 12:51:39.062370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.418 [2024-11-15 12:51:39.066334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.418 [2024-11-15 12:51:39.066382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.418 [2024-11-15 12:51:39.066409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.418 [2024-11-15 12:51:39.070211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.418 [2024-11-15 12:51:39.070259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.418 [2024-11-15 12:51:39.070286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.418 [2024-11-15 12:51:39.074147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.418 [2024-11-15 12:51:39.074195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.418 [2024-11-15 12:51:39.074222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.418 [2024-11-15 12:51:39.078089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.418 [2024-11-15 12:51:39.078153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.418 [2024-11-15 12:51:39.078181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.418 [2024-11-15 12:51:39.082435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.418 [2024-11-15 12:51:39.082483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.418 [2024-11-15 12:51:39.082510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.678 [2024-11-15 12:51:39.086628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.678 [2024-11-15 12:51:39.086686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.678 [2024-11-15 12:51:39.086714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.679 [2024-11-15 12:51:39.090939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.679 [2024-11-15 12:51:39.090987] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.679 [2024-11-15 12:51:39.091014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.679 [2024-11-15 12:51:39.094787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.679 [2024-11-15 12:51:39.094834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.679 [2024-11-15 12:51:39.094860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.679 [2024-11-15 12:51:39.098658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.679 [2024-11-15 12:51:39.098715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.679 [2024-11-15 12:51:39.098744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.679 [2024-11-15 12:51:39.102626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.679 [2024-11-15 12:51:39.102683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.679 [2024-11-15 12:51:39.102711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.679 [2024-11-15 12:51:39.106446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.679 [2024-11-15 12:51:39.106495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.679 [2024-11-15 12:51:39.106522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.679 [2024-11-15 12:51:39.110368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.679 [2024-11-15 12:51:39.110417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.679 [2024-11-15 12:51:39.110445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.679 [2024-11-15 12:51:39.114364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.679 [2024-11-15 12:51:39.114413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.679 [2024-11-15 12:51:39.114440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.679 [2024-11-15 12:51:39.118358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 
00:16:30.679 [2024-11-15 12:51:39.118406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.679 [2024-11-15 12:51:39.118433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.679 [2024-11-15 12:51:39.122342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.679 [2024-11-15 12:51:39.122390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.679 [2024-11-15 12:51:39.122417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.679 [2024-11-15 12:51:39.126385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.679 [2024-11-15 12:51:39.126434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.679 [2024-11-15 12:51:39.126461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.679 [2024-11-15 12:51:39.130336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.679 [2024-11-15 12:51:39.130384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.679 [2024-11-15 12:51:39.130411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.679 [2024-11-15 12:51:39.134244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.679 [2024-11-15 12:51:39.134292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.679 [2024-11-15 12:51:39.134319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.679 [2024-11-15 12:51:39.138239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.679 [2024-11-15 12:51:39.138287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.679 [2024-11-15 12:51:39.138314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.679 [2024-11-15 12:51:39.142258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.679 [2024-11-15 12:51:39.142306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.679 [2024-11-15 12:51:39.142333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.679 [2024-11-15 12:51:39.146216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x2497400) 00:16:30.679 [2024-11-15 12:51:39.146266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.679 [2024-11-15 12:51:39.146293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.679 [2024-11-15 12:51:39.150235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.679 [2024-11-15 12:51:39.150285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.679 [2024-11-15 12:51:39.150312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.679 [2024-11-15 12:51:39.154214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.679 [2024-11-15 12:51:39.154262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.679 [2024-11-15 12:51:39.154289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.679 [2024-11-15 12:51:39.158183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.679 [2024-11-15 12:51:39.158231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.679 [2024-11-15 12:51:39.158258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.679 [2024-11-15 12:51:39.162111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.679 [2024-11-15 12:51:39.162160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.679 [2024-11-15 12:51:39.162187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.679 [2024-11-15 12:51:39.165941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.679 [2024-11-15 12:51:39.165992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.679 [2024-11-15 12:51:39.166036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.679 [2024-11-15 12:51:39.169857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.679 [2024-11-15 12:51:39.169893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.679 [2024-11-15 12:51:39.169921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.679 [2024-11-15 12:51:39.173660] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.679 [2024-11-15 12:51:39.173745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.679 [2024-11-15 12:51:39.173773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.679 [2024-11-15 12:51:39.177490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.679 [2024-11-15 12:51:39.177537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.679 [2024-11-15 12:51:39.177564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.679 [2024-11-15 12:51:39.181364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.679 [2024-11-15 12:51:39.181413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.679 [2024-11-15 12:51:39.181441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.679 [2024-11-15 12:51:39.185263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.679 [2024-11-15 12:51:39.185311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.679 [2024-11-15 12:51:39.185338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.679 [2024-11-15 12:51:39.189202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.679 [2024-11-15 12:51:39.189251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.679 [2024-11-15 12:51:39.189278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.679 [2024-11-15 12:51:39.193113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.680 [2024-11-15 12:51:39.193162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.680 [2024-11-15 12:51:39.193189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.680 [2024-11-15 12:51:39.196953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.680 [2024-11-15 12:51:39.197001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.680 [2024-11-15 12:51:39.197028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:16:30.680 [2024-11-15 12:51:39.200832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.680 [2024-11-15 12:51:39.200879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.680 [2024-11-15 12:51:39.200907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.680 [2024-11-15 12:51:39.204691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.680 [2024-11-15 12:51:39.204739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.680 [2024-11-15 12:51:39.204766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.680 [2024-11-15 12:51:39.208569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.680 [2024-11-15 12:51:39.208643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.680 [2024-11-15 12:51:39.208657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.680 [2024-11-15 12:51:39.212415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.680 [2024-11-15 12:51:39.212464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.680 [2024-11-15 12:51:39.212490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.680 [2024-11-15 12:51:39.216441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.680 [2024-11-15 12:51:39.216490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.680 [2024-11-15 12:51:39.216517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.680 [2024-11-15 12:51:39.220321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.680 [2024-11-15 12:51:39.220370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.680 [2024-11-15 12:51:39.220398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.680 [2024-11-15 12:51:39.224344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.680 [2024-11-15 12:51:39.224393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.680 [2024-11-15 12:51:39.224420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.680 [2024-11-15 12:51:39.228228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.680 [2024-11-15 12:51:39.228277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.680 [2024-11-15 12:51:39.228304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.680 [2024-11-15 12:51:39.232124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.680 [2024-11-15 12:51:39.232172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.680 [2024-11-15 12:51:39.232199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.680 [2024-11-15 12:51:39.235970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.680 [2024-11-15 12:51:39.236035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.680 [2024-11-15 12:51:39.236063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.680 [2024-11-15 12:51:39.239865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.680 [2024-11-15 12:51:39.239914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.680 [2024-11-15 12:51:39.239941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.680 [2024-11-15 12:51:39.243727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.680 [2024-11-15 12:51:39.243779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.680 [2024-11-15 12:51:39.243806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.680 [2024-11-15 12:51:39.247584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.680 [2024-11-15 12:51:39.247657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.680 [2024-11-15 12:51:39.247685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.680 [2024-11-15 12:51:39.251529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.680 [2024-11-15 12:51:39.251578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.680 [2024-11-15 12:51:39.251606] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.680 [2024-11-15 12:51:39.255383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.680 [2024-11-15 12:51:39.255432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.680 [2024-11-15 12:51:39.255475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.680 [2024-11-15 12:51:39.259331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.680 [2024-11-15 12:51:39.259381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.680 [2024-11-15 12:51:39.259408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.680 [2024-11-15 12:51:39.263310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.680 [2024-11-15 12:51:39.263358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.680 [2024-11-15 12:51:39.263385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.680 [2024-11-15 12:51:39.267152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.680 [2024-11-15 12:51:39.267200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.680 [2024-11-15 12:51:39.267227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.680 [2024-11-15 12:51:39.271225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.680 [2024-11-15 12:51:39.271273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.680 [2024-11-15 12:51:39.271300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.680 [2024-11-15 12:51:39.275173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.680 [2024-11-15 12:51:39.275221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.680 [2024-11-15 12:51:39.275248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.680 [2024-11-15 12:51:39.279080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.680 [2024-11-15 12:51:39.279129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.680 [2024-11-15 
12:51:39.279156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.680 [2024-11-15 12:51:39.282943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.680 [2024-11-15 12:51:39.282992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.680 [2024-11-15 12:51:39.283019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.680 [2024-11-15 12:51:39.286934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.680 [2024-11-15 12:51:39.286983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.680 [2024-11-15 12:51:39.287010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.680 [2024-11-15 12:51:39.290896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.680 [2024-11-15 12:51:39.290944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.680 [2024-11-15 12:51:39.290971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.680 [2024-11-15 12:51:39.294777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.680 [2024-11-15 12:51:39.294824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.680 [2024-11-15 12:51:39.294852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.680 [2024-11-15 12:51:39.298577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.680 [2024-11-15 12:51:39.298635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.681 [2024-11-15 12:51:39.298662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.681 [2024-11-15 12:51:39.302442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.681 [2024-11-15 12:51:39.302491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.681 [2024-11-15 12:51:39.302518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.681 [2024-11-15 12:51:39.306320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.681 [2024-11-15 12:51:39.306369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12032 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:30.681 [2024-11-15 12:51:39.306396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.681 [2024-11-15 12:51:39.310286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.681 [2024-11-15 12:51:39.310333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.681 [2024-11-15 12:51:39.310360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.681 [2024-11-15 12:51:39.314222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.681 [2024-11-15 12:51:39.314269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.681 [2024-11-15 12:51:39.314296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.681 [2024-11-15 12:51:39.318153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.681 [2024-11-15 12:51:39.318202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.681 [2024-11-15 12:51:39.318229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.681 [2024-11-15 12:51:39.321974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.681 [2024-11-15 12:51:39.322054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.681 [2024-11-15 12:51:39.322082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.681 [2024-11-15 12:51:39.325852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.681 [2024-11-15 12:51:39.325887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.681 [2024-11-15 12:51:39.325915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.681 [2024-11-15 12:51:39.329629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.681 [2024-11-15 12:51:39.329698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.681 [2024-11-15 12:51:39.329711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.681 [2024-11-15 12:51:39.333432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.681 [2024-11-15 12:51:39.333480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:10 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.681 [2024-11-15 12:51:39.333507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.681 [2024-11-15 12:51:39.337330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.681 [2024-11-15 12:51:39.337379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.681 [2024-11-15 12:51:39.337406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.681 [2024-11-15 12:51:39.341226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.681 [2024-11-15 12:51:39.341275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.681 [2024-11-15 12:51:39.341302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.681 [2024-11-15 12:51:39.345597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.681 [2024-11-15 12:51:39.345709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.681 [2024-11-15 12:51:39.345723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.941 [2024-11-15 12:51:39.349874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.941 [2024-11-15 12:51:39.349912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.941 [2024-11-15 12:51:39.349924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.941 [2024-11-15 12:51:39.354102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.941 [2024-11-15 12:51:39.354150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.941 [2024-11-15 12:51:39.354177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.941 [2024-11-15 12:51:39.357979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.941 [2024-11-15 12:51:39.358045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.941 [2024-11-15 12:51:39.358056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.941 [2024-11-15 12:51:39.361861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.941 [2024-11-15 12:51:39.361896] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.941 [2024-11-15 12:51:39.361924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.941 [2024-11-15 12:51:39.365644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.941 [2024-11-15 12:51:39.365715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.941 [2024-11-15 12:51:39.365760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.941 [2024-11-15 12:51:39.369528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.941 [2024-11-15 12:51:39.369576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.941 [2024-11-15 12:51:39.369603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.942 [2024-11-15 12:51:39.373405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.942 [2024-11-15 12:51:39.373455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.942 [2024-11-15 12:51:39.373482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.942 7595.00 IOPS, 949.38 MiB/s [2024-11-15T12:51:39.612Z] [2024-11-15 12:51:39.378604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.942 [2024-11-15 12:51:39.378679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.942 [2024-11-15 12:51:39.378708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.942 [2024-11-15 12:51:39.382610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.942 [2024-11-15 12:51:39.382667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.942 [2024-11-15 12:51:39.382695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.942 [2024-11-15 12:51:39.386510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.942 [2024-11-15 12:51:39.386558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.942 [2024-11-15 12:51:39.386586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.942 [2024-11-15 12:51:39.390471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x2497400) 00:16:30.942 [2024-11-15 12:51:39.390521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.942 [2024-11-15 12:51:39.390548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.942 [2024-11-15 12:51:39.394366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.942 [2024-11-15 12:51:39.394414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.942 [2024-11-15 12:51:39.394440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.942 [2024-11-15 12:51:39.398257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.942 [2024-11-15 12:51:39.398304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.942 [2024-11-15 12:51:39.398331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.942 [2024-11-15 12:51:39.402186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.942 [2024-11-15 12:51:39.402235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.942 [2024-11-15 12:51:39.402262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.942 [2024-11-15 12:51:39.406136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.942 [2024-11-15 12:51:39.406184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.942 [2024-11-15 12:51:39.406212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.942 [2024-11-15 12:51:39.410105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.942 [2024-11-15 12:51:39.410154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.942 [2024-11-15 12:51:39.410181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.942 [2024-11-15 12:51:39.413917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.942 [2024-11-15 12:51:39.413954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.942 [2024-11-15 12:51:39.413983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.942 [2024-11-15 12:51:39.417849] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.942 [2024-11-15 12:51:39.417886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.942 [2024-11-15 12:51:39.417916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.942 [2024-11-15 12:51:39.421753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.942 [2024-11-15 12:51:39.421806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.942 [2024-11-15 12:51:39.421820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.942 [2024-11-15 12:51:39.425573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.942 [2024-11-15 12:51:39.425630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.942 [2024-11-15 12:51:39.425659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.942 [2024-11-15 12:51:39.429463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.942 [2024-11-15 12:51:39.429511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.942 [2024-11-15 12:51:39.429539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.942 [2024-11-15 12:51:39.433360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.942 [2024-11-15 12:51:39.433409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.942 [2024-11-15 12:51:39.433436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.942 [2024-11-15 12:51:39.437223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.942 [2024-11-15 12:51:39.437272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.942 [2024-11-15 12:51:39.437299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.942 [2024-11-15 12:51:39.441101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.942 [2024-11-15 12:51:39.441150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.942 [2024-11-15 12:51:39.441177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:16:30.942 [2024-11-15 12:51:39.445091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.942 [2024-11-15 12:51:39.445140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.942 [2024-11-15 12:51:39.445166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.942 [2024-11-15 12:51:39.449095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.942 [2024-11-15 12:51:39.449144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.942 [2024-11-15 12:51:39.449171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.942 [2024-11-15 12:51:39.453132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.942 [2024-11-15 12:51:39.453182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.942 [2024-11-15 12:51:39.453209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.942 [2024-11-15 12:51:39.457015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.942 [2024-11-15 12:51:39.457064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.942 [2024-11-15 12:51:39.457092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.942 [2024-11-15 12:51:39.460852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.942 [2024-11-15 12:51:39.460901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.942 [2024-11-15 12:51:39.460929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.942 [2024-11-15 12:51:39.464753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.942 [2024-11-15 12:51:39.464801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.943 [2024-11-15 12:51:39.464829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.943 [2024-11-15 12:51:39.468645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.943 [2024-11-15 12:51:39.468693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.943 [2024-11-15 12:51:39.468721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.943 [2024-11-15 12:51:39.472516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.943 [2024-11-15 12:51:39.472565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.943 [2024-11-15 12:51:39.472592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.943 [2024-11-15 12:51:39.476358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.943 [2024-11-15 12:51:39.476407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.943 [2024-11-15 12:51:39.476434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.943 [2024-11-15 12:51:39.480292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.943 [2024-11-15 12:51:39.480340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.943 [2024-11-15 12:51:39.480367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.943 [2024-11-15 12:51:39.484259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.943 [2024-11-15 12:51:39.484307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.943 [2024-11-15 12:51:39.484334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.943 [2024-11-15 12:51:39.488230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.943 [2024-11-15 12:51:39.488279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.943 [2024-11-15 12:51:39.488306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.943 [2024-11-15 12:51:39.492181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.943 [2024-11-15 12:51:39.492229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.943 [2024-11-15 12:51:39.492256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.943 [2024-11-15 12:51:39.496098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.943 [2024-11-15 12:51:39.496148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.943 [2024-11-15 12:51:39.496175] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.943 [2024-11-15 12:51:39.499927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.943 [2024-11-15 12:51:39.499977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.943 [2024-11-15 12:51:39.500018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.943 [2024-11-15 12:51:39.503790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.943 [2024-11-15 12:51:39.503838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.943 [2024-11-15 12:51:39.503867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.943 [2024-11-15 12:51:39.507796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.943 [2024-11-15 12:51:39.507845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.943 [2024-11-15 12:51:39.507872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.943 [2024-11-15 12:51:39.511742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.943 [2024-11-15 12:51:39.511791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.943 [2024-11-15 12:51:39.511819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.943 [2024-11-15 12:51:39.515627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.943 [2024-11-15 12:51:39.515675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.943 [2024-11-15 12:51:39.515703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.943 [2024-11-15 12:51:39.519461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.943 [2024-11-15 12:51:39.519510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.943 [2024-11-15 12:51:39.519537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.943 [2024-11-15 12:51:39.523378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.943 [2024-11-15 12:51:39.523426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.943 
[2024-11-15 12:51:39.523453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.943 [2024-11-15 12:51:39.527374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.943 [2024-11-15 12:51:39.527422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.943 [2024-11-15 12:51:39.527450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.943 [2024-11-15 12:51:39.531266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.943 [2024-11-15 12:51:39.531314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.943 [2024-11-15 12:51:39.531341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.943 [2024-11-15 12:51:39.535219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.943 [2024-11-15 12:51:39.535267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.943 [2024-11-15 12:51:39.535294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.943 [2024-11-15 12:51:39.539229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.943 [2024-11-15 12:51:39.539279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.943 [2024-11-15 12:51:39.539306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.943 [2024-11-15 12:51:39.543401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.943 [2024-11-15 12:51:39.543450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.943 [2024-11-15 12:51:39.543478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.943 [2024-11-15 12:51:39.547493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.943 [2024-11-15 12:51:39.547543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.943 [2024-11-15 12:51:39.547570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.943 [2024-11-15 12:51:39.551400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.943 [2024-11-15 12:51:39.551448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6496 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.943 [2024-11-15 12:51:39.551475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.943 [2024-11-15 12:51:39.555352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.943 [2024-11-15 12:51:39.555400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.943 [2024-11-15 12:51:39.555428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.943 [2024-11-15 12:51:39.559397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.943 [2024-11-15 12:51:39.559447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.943 [2024-11-15 12:51:39.559474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.943 [2024-11-15 12:51:39.563361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.943 [2024-11-15 12:51:39.563410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.943 [2024-11-15 12:51:39.563437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.943 [2024-11-15 12:51:39.567328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.944 [2024-11-15 12:51:39.567376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.944 [2024-11-15 12:51:39.567403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.944 [2024-11-15 12:51:39.571244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.944 [2024-11-15 12:51:39.571292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.944 [2024-11-15 12:51:39.571319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.944 [2024-11-15 12:51:39.575226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.944 [2024-11-15 12:51:39.575275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.944 [2024-11-15 12:51:39.575302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.944 [2024-11-15 12:51:39.579085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.944 [2024-11-15 12:51:39.579133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:8 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.944 [2024-11-15 12:51:39.579160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.944 [2024-11-15 12:51:39.582946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.944 [2024-11-15 12:51:39.582993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.944 [2024-11-15 12:51:39.583020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.944 [2024-11-15 12:51:39.586750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.944 [2024-11-15 12:51:39.586798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.944 [2024-11-15 12:51:39.586825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.944 [2024-11-15 12:51:39.590630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.944 [2024-11-15 12:51:39.590688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.944 [2024-11-15 12:51:39.590717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:30.944 [2024-11-15 12:51:39.594472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.944 [2024-11-15 12:51:39.594519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.944 [2024-11-15 12:51:39.594546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:30.944 [2024-11-15 12:51:39.598403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.944 [2024-11-15 12:51:39.598451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.944 [2024-11-15 12:51:39.598479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:30.944 [2024-11-15 12:51:39.602362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.944 [2024-11-15 12:51:39.602411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.944 [2024-11-15 12:51:39.602438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:30.944 [2024-11-15 12:51:39.606462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:30.944 [2024-11-15 12:51:39.606510] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:30.944 [2024-11-15 12:51:39.606537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.205 [2024-11-15 12:51:39.610840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.205 [2024-11-15 12:51:39.610887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.205 [2024-11-15 12:51:39.610915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.205 [2024-11-15 12:51:39.614912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.205 [2024-11-15 12:51:39.614961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.205 [2024-11-15 12:51:39.614972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.205 [2024-11-15 12:51:39.619034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.205 [2024-11-15 12:51:39.619083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.205 [2024-11-15 12:51:39.619110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.205 [2024-11-15 12:51:39.622976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.205 [2024-11-15 12:51:39.623024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.205 [2024-11-15 12:51:39.623051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.205 [2024-11-15 12:51:39.626936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.205 [2024-11-15 12:51:39.626984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.205 [2024-11-15 12:51:39.627012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.205 [2024-11-15 12:51:39.630871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.205 [2024-11-15 12:51:39.630919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.205 [2024-11-15 12:51:39.630946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.205 [2024-11-15 12:51:39.634779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 
00:16:31.205 [2024-11-15 12:51:39.634826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.205 [2024-11-15 12:51:39.634854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.205 [2024-11-15 12:51:39.638920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.205 [2024-11-15 12:51:39.638969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.205 [2024-11-15 12:51:39.638996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.205 [2024-11-15 12:51:39.642777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.205 [2024-11-15 12:51:39.642826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.205 [2024-11-15 12:51:39.642854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.205 [2024-11-15 12:51:39.646670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.205 [2024-11-15 12:51:39.646717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.205 [2024-11-15 12:51:39.646744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.205 [2024-11-15 12:51:39.650529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.205 [2024-11-15 12:51:39.650578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.205 [2024-11-15 12:51:39.650605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.205 [2024-11-15 12:51:39.654347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.205 [2024-11-15 12:51:39.654394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.205 [2024-11-15 12:51:39.654421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.205 [2024-11-15 12:51:39.658315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.205 [2024-11-15 12:51:39.658362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.205 [2024-11-15 12:51:39.658389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.205 [2024-11-15 12:51:39.662332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.205 [2024-11-15 12:51:39.662380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.205 [2024-11-15 12:51:39.662408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.205 [2024-11-15 12:51:39.666350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.205 [2024-11-15 12:51:39.666398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.205 [2024-11-15 12:51:39.666425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.205 [2024-11-15 12:51:39.670297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.205 [2024-11-15 12:51:39.670345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.205 [2024-11-15 12:51:39.670372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.205 [2024-11-15 12:51:39.674231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.205 [2024-11-15 12:51:39.674279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.205 [2024-11-15 12:51:39.674306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.205 [2024-11-15 12:51:39.678183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.205 [2024-11-15 12:51:39.678230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.205 [2024-11-15 12:51:39.678258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.205 [2024-11-15 12:51:39.681976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.205 [2024-11-15 12:51:39.682046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.205 [2024-11-15 12:51:39.682073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.205 [2024-11-15 12:51:39.685851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.205 [2024-11-15 12:51:39.685903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.205 [2024-11-15 12:51:39.685916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.205 [2024-11-15 12:51:39.689698] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.205 [2024-11-15 12:51:39.689763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.205 [2024-11-15 12:51:39.689792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.205 [2024-11-15 12:51:39.693490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.205 [2024-11-15 12:51:39.693539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.205 [2024-11-15 12:51:39.693565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.205 [2024-11-15 12:51:39.697413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.205 [2024-11-15 12:51:39.697462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.205 [2024-11-15 12:51:39.697490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.205 [2024-11-15 12:51:39.701358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.205 [2024-11-15 12:51:39.701408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.205 [2024-11-15 12:51:39.701435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.205 [2024-11-15 12:51:39.705371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.205 [2024-11-15 12:51:39.705421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.205 [2024-11-15 12:51:39.705448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.205 [2024-11-15 12:51:39.709249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.205 [2024-11-15 12:51:39.709298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.206 [2024-11-15 12:51:39.709326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.206 [2024-11-15 12:51:39.713289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.206 [2024-11-15 12:51:39.713339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.206 [2024-11-15 12:51:39.713367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:16:31.206 [2024-11-15 12:51:39.717260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.206 [2024-11-15 12:51:39.717310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.206 [2024-11-15 12:51:39.717338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.206 [2024-11-15 12:51:39.721184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.206 [2024-11-15 12:51:39.721232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.206 [2024-11-15 12:51:39.721259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.206 [2024-11-15 12:51:39.725096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.206 [2024-11-15 12:51:39.725144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.206 [2024-11-15 12:51:39.725172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.206 [2024-11-15 12:51:39.728958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.206 [2024-11-15 12:51:39.729006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.206 [2024-11-15 12:51:39.729034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.206 [2024-11-15 12:51:39.732857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.206 [2024-11-15 12:51:39.732905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.206 [2024-11-15 12:51:39.732932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.206 [2024-11-15 12:51:39.736751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.206 [2024-11-15 12:51:39.736799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.206 [2024-11-15 12:51:39.736826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.206 [2024-11-15 12:51:39.740609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.206 [2024-11-15 12:51:39.740669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.206 [2024-11-15 12:51:39.740697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.206 [2024-11-15 12:51:39.744472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.206 [2024-11-15 12:51:39.744521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.206 [2024-11-15 12:51:39.744548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.206 [2024-11-15 12:51:39.748457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.206 [2024-11-15 12:51:39.748506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.206 [2024-11-15 12:51:39.748533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.206 [2024-11-15 12:51:39.752373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.206 [2024-11-15 12:51:39.752421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.206 [2024-11-15 12:51:39.752448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.206 [2024-11-15 12:51:39.756335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.206 [2024-11-15 12:51:39.756400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.206 [2024-11-15 12:51:39.756428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.206 [2024-11-15 12:51:39.760218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.206 [2024-11-15 12:51:39.760266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.206 [2024-11-15 12:51:39.760294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.206 [2024-11-15 12:51:39.764153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.206 [2024-11-15 12:51:39.764201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.206 [2024-11-15 12:51:39.764229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.206 [2024-11-15 12:51:39.768049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.206 [2024-11-15 12:51:39.768097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.206 [2024-11-15 12:51:39.768124] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.206 [2024-11-15 12:51:39.771952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.206 [2024-11-15 12:51:39.772018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.206 [2024-11-15 12:51:39.772045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.206 [2024-11-15 12:51:39.775833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.206 [2024-11-15 12:51:39.775882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.206 [2024-11-15 12:51:39.775909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.206 [2024-11-15 12:51:39.779693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.206 [2024-11-15 12:51:39.779742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.206 [2024-11-15 12:51:39.779770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.206 [2024-11-15 12:51:39.783593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.206 [2024-11-15 12:51:39.783666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.206 [2024-11-15 12:51:39.783695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.206 [2024-11-15 12:51:39.787496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.206 [2024-11-15 12:51:39.787544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.206 [2024-11-15 12:51:39.787571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.206 [2024-11-15 12:51:39.791502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.206 [2024-11-15 12:51:39.791551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.206 [2024-11-15 12:51:39.791578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.206 [2024-11-15 12:51:39.795433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.206 [2024-11-15 12:51:39.795481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.206 [2024-11-15 
12:51:39.795509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.206 [2024-11-15 12:51:39.799404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.206 [2024-11-15 12:51:39.799453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.206 [2024-11-15 12:51:39.799480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.206 [2024-11-15 12:51:39.803928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.206 [2024-11-15 12:51:39.803995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.206 [2024-11-15 12:51:39.804023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.206 [2024-11-15 12:51:39.808167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.206 [2024-11-15 12:51:39.808216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.206 [2024-11-15 12:51:39.808243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.206 [2024-11-15 12:51:39.812470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.206 [2024-11-15 12:51:39.812521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.206 [2024-11-15 12:51:39.812550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.206 [2024-11-15 12:51:39.816605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.206 [2024-11-15 12:51:39.816683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.207 [2024-11-15 12:51:39.816713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.207 [2024-11-15 12:51:39.821257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.207 [2024-11-15 12:51:39.821292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.207 [2024-11-15 12:51:39.821320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.207 [2024-11-15 12:51:39.825710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.207 [2024-11-15 12:51:39.825763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:31.207 [2024-11-15 12:51:39.825793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.207 [2024-11-15 12:51:39.830175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.207 [2024-11-15 12:51:39.830223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.207 [2024-11-15 12:51:39.830250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.207 [2024-11-15 12:51:39.834450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.207 [2024-11-15 12:51:39.834499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.207 [2024-11-15 12:51:39.834527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.207 [2024-11-15 12:51:39.838698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.207 [2024-11-15 12:51:39.838757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.207 [2024-11-15 12:51:39.838786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.207 [2024-11-15 12:51:39.843182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.207 [2024-11-15 12:51:39.843231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.207 [2024-11-15 12:51:39.843258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.207 [2024-11-15 12:51:39.847553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.207 [2024-11-15 12:51:39.847627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.207 [2024-11-15 12:51:39.847658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.207 [2024-11-15 12:51:39.851828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.207 [2024-11-15 12:51:39.851878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.207 [2024-11-15 12:51:39.851906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.207 [2024-11-15 12:51:39.856481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.207 [2024-11-15 12:51:39.856532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.207 [2024-11-15 12:51:39.856560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.207 [2024-11-15 12:51:39.860897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.207 [2024-11-15 12:51:39.860963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.207 [2024-11-15 12:51:39.861007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.207 [2024-11-15 12:51:39.865244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.207 [2024-11-15 12:51:39.865293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.207 [2024-11-15 12:51:39.865321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.207 [2024-11-15 12:51:39.869809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.207 [2024-11-15 12:51:39.869850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.207 [2024-11-15 12:51:39.869864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.467 [2024-11-15 12:51:39.874656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.467 [2024-11-15 12:51:39.874719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.467 [2024-11-15 12:51:39.874747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.467 [2024-11-15 12:51:39.878913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.467 [2024-11-15 12:51:39.878979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.467 [2024-11-15 12:51:39.878992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.467 [2024-11-15 12:51:39.883382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.467 [2024-11-15 12:51:39.883433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.467 [2024-11-15 12:51:39.883461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.467 [2024-11-15 12:51:39.887519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.467 [2024-11-15 12:51:39.887568] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.467 [2024-11-15 12:51:39.887596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.467 [2024-11-15 12:51:39.891655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.467 [2024-11-15 12:51:39.891704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.467 [2024-11-15 12:51:39.891732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.467 [2024-11-15 12:51:39.895754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.467 [2024-11-15 12:51:39.895807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.467 [2024-11-15 12:51:39.895821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.467 [2024-11-15 12:51:39.899773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.467 [2024-11-15 12:51:39.899821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.467 [2024-11-15 12:51:39.899848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.467 [2024-11-15 12:51:39.903641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.468 [2024-11-15 12:51:39.903689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.468 [2024-11-15 12:51:39.903718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.468 [2024-11-15 12:51:39.907592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.468 [2024-11-15 12:51:39.907650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.468 [2024-11-15 12:51:39.907678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.468 [2024-11-15 12:51:39.911785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.468 [2024-11-15 12:51:39.911833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.468 [2024-11-15 12:51:39.911862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.468 [2024-11-15 12:51:39.915754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.468 
[2024-11-15 12:51:39.915803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.468 [2024-11-15 12:51:39.915830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.468 [2024-11-15 12:51:39.919895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.468 [2024-11-15 12:51:39.919945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.468 [2024-11-15 12:51:39.919972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.468 [2024-11-15 12:51:39.923918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.468 [2024-11-15 12:51:39.923967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.468 [2024-11-15 12:51:39.923994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.468 [2024-11-15 12:51:39.928155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.468 [2024-11-15 12:51:39.928204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.468 [2024-11-15 12:51:39.928231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.468 [2024-11-15 12:51:39.932171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.468 [2024-11-15 12:51:39.932220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.468 [2024-11-15 12:51:39.932248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.468 [2024-11-15 12:51:39.936221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.468 [2024-11-15 12:51:39.936270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.468 [2024-11-15 12:51:39.936298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.468 [2024-11-15 12:51:39.940427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.468 [2024-11-15 12:51:39.940476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.468 [2024-11-15 12:51:39.940504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.468 [2024-11-15 12:51:39.944499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x2497400) 00:16:31.468 [2024-11-15 12:51:39.944548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.468 [2024-11-15 12:51:39.944575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.468 [2024-11-15 12:51:39.948595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.468 [2024-11-15 12:51:39.948655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.468 [2024-11-15 12:51:39.948684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.468 [2024-11-15 12:51:39.952752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.468 [2024-11-15 12:51:39.952801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.468 [2024-11-15 12:51:39.952829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.468 [2024-11-15 12:51:39.957116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.468 [2024-11-15 12:51:39.957167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.468 [2024-11-15 12:51:39.957194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.468 [2024-11-15 12:51:39.961102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.468 [2024-11-15 12:51:39.961152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.468 [2024-11-15 12:51:39.961180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.468 [2024-11-15 12:51:39.965079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.468 [2024-11-15 12:51:39.965128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.468 [2024-11-15 12:51:39.965156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.468 [2024-11-15 12:51:39.969085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.468 [2024-11-15 12:51:39.969135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.468 [2024-11-15 12:51:39.969163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.468 [2024-11-15 12:51:39.973260] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.468 [2024-11-15 12:51:39.973311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.468 [2024-11-15 12:51:39.973339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.468 [2024-11-15 12:51:39.977212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.468 [2024-11-15 12:51:39.977261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.468 [2024-11-15 12:51:39.977289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.468 [2024-11-15 12:51:39.981191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.468 [2024-11-15 12:51:39.981240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.468 [2024-11-15 12:51:39.981268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.468 [2024-11-15 12:51:39.985252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.468 [2024-11-15 12:51:39.985302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.468 [2024-11-15 12:51:39.985329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.468 [2024-11-15 12:51:39.989351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.468 [2024-11-15 12:51:39.989401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.468 [2024-11-15 12:51:39.989428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.468 [2024-11-15 12:51:39.993434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.468 [2024-11-15 12:51:39.993500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.468 [2024-11-15 12:51:39.993513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.468 [2024-11-15 12:51:39.997476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.468 [2024-11-15 12:51:39.997526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.468 [2024-11-15 12:51:39.997554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:16:31.468 [2024-11-15 12:51:40.001826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.468 [2024-11-15 12:51:40.001866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.468 [2024-11-15 12:51:40.001879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.468 [2024-11-15 12:51:40.006252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.468 [2024-11-15 12:51:40.006306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.468 [2024-11-15 12:51:40.006335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.468 [2024-11-15 12:51:40.010612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.468 [2024-11-15 12:51:40.010663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.468 [2024-11-15 12:51:40.010678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.468 [2024-11-15 12:51:40.015852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.469 [2024-11-15 12:51:40.015939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.469 [2024-11-15 12:51:40.015969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.469 [2024-11-15 12:51:40.021211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.469 [2024-11-15 12:51:40.021265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.469 [2024-11-15 12:51:40.021295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.469 [2024-11-15 12:51:40.025409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.469 [2024-11-15 12:51:40.025460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.469 [2024-11-15 12:51:40.025488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.469 [2024-11-15 12:51:40.029587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.469 [2024-11-15 12:51:40.029647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.469 [2024-11-15 12:51:40.029700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.469 [2024-11-15 12:51:40.033765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.469 [2024-11-15 12:51:40.033803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.469 [2024-11-15 12:51:40.033817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.469 [2024-11-15 12:51:40.037981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.469 [2024-11-15 12:51:40.038033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.469 [2024-11-15 12:51:40.038060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.469 [2024-11-15 12:51:40.042074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.469 [2024-11-15 12:51:40.042136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.469 [2024-11-15 12:51:40.042163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.469 [2024-11-15 12:51:40.045981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.469 [2024-11-15 12:51:40.046020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.469 [2024-11-15 12:51:40.046048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.469 [2024-11-15 12:51:40.049989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.469 [2024-11-15 12:51:40.050074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.469 [2024-11-15 12:51:40.050102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.469 [2024-11-15 12:51:40.053999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.469 [2024-11-15 12:51:40.054097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.469 [2024-11-15 12:51:40.054108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.469 [2024-11-15 12:51:40.057959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.469 [2024-11-15 12:51:40.058028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.469 [2024-11-15 12:51:40.058055] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.469 [2024-11-15 12:51:40.061872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.469 [2024-11-15 12:51:40.061912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.469 [2024-11-15 12:51:40.061925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.469 [2024-11-15 12:51:40.065915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.469 [2024-11-15 12:51:40.065955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.469 [2024-11-15 12:51:40.065969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.469 [2024-11-15 12:51:40.070088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.469 [2024-11-15 12:51:40.070151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.469 [2024-11-15 12:51:40.070179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.469 [2024-11-15 12:51:40.074239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.469 [2024-11-15 12:51:40.074289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.469 [2024-11-15 12:51:40.074316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.469 [2024-11-15 12:51:40.078201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.469 [2024-11-15 12:51:40.078250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.469 [2024-11-15 12:51:40.078277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.469 [2024-11-15 12:51:40.082132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.469 [2024-11-15 12:51:40.082179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.469 [2024-11-15 12:51:40.082206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.469 [2024-11-15 12:51:40.085996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.469 [2024-11-15 12:51:40.086062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:31.469 [2024-11-15 12:51:40.086089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.469 [2024-11-15 12:51:40.089943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.469 [2024-11-15 12:51:40.089995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.469 [2024-11-15 12:51:40.090020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.469 [2024-11-15 12:51:40.093857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.469 [2024-11-15 12:51:40.093907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.469 [2024-11-15 12:51:40.093936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.469 [2024-11-15 12:51:40.097730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.469 [2024-11-15 12:51:40.097765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.469 [2024-11-15 12:51:40.097794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.469 [2024-11-15 12:51:40.101528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.469 [2024-11-15 12:51:40.101575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.469 [2024-11-15 12:51:40.101602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.469 [2024-11-15 12:51:40.105412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.469 [2024-11-15 12:51:40.105459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.469 [2024-11-15 12:51:40.105487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.469 [2024-11-15 12:51:40.109365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.469 [2024-11-15 12:51:40.109412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.469 [2024-11-15 12:51:40.109439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.469 [2024-11-15 12:51:40.113291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.469 [2024-11-15 12:51:40.113340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24992 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.469 [2024-11-15 12:51:40.113367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.469 [2024-11-15 12:51:40.117256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.469 [2024-11-15 12:51:40.117304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.469 [2024-11-15 12:51:40.117331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.469 [2024-11-15 12:51:40.121197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.469 [2024-11-15 12:51:40.121245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.469 [2024-11-15 12:51:40.121272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.469 [2024-11-15 12:51:40.125137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.469 [2024-11-15 12:51:40.125187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.470 [2024-11-15 12:51:40.125229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.470 [2024-11-15 12:51:40.129023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.470 [2024-11-15 12:51:40.129072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.470 [2024-11-15 12:51:40.129099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.470 [2024-11-15 12:51:40.133317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.470 [2024-11-15 12:51:40.133367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.470 [2024-11-15 12:51:40.133379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.730 [2024-11-15 12:51:40.137577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.730 [2024-11-15 12:51:40.137635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.730 [2024-11-15 12:51:40.137663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.730 [2024-11-15 12:51:40.142285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.730 [2024-11-15 12:51:40.142351] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.730 [2024-11-15 12:51:40.142378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.730 [2024-11-15 12:51:40.146337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.730 [2024-11-15 12:51:40.146385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.730 [2024-11-15 12:51:40.146412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.730 [2024-11-15 12:51:40.150322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.730 [2024-11-15 12:51:40.150370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.730 [2024-11-15 12:51:40.150397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.730 [2024-11-15 12:51:40.154258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.730 [2024-11-15 12:51:40.154305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.730 [2024-11-15 12:51:40.154332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.730 [2024-11-15 12:51:40.158271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.730 [2024-11-15 12:51:40.158319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.730 [2024-11-15 12:51:40.158346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.730 [2024-11-15 12:51:40.162228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.730 [2024-11-15 12:51:40.162277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.730 [2024-11-15 12:51:40.162304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.730 [2024-11-15 12:51:40.166140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.730 [2024-11-15 12:51:40.166188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.730 [2024-11-15 12:51:40.166216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.730 [2024-11-15 12:51:40.170140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.730 
[2024-11-15 12:51:40.170191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.730 [2024-11-15 12:51:40.170219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.730 [2024-11-15 12:51:40.174114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.730 [2024-11-15 12:51:40.174162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.730 [2024-11-15 12:51:40.174190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.730 [2024-11-15 12:51:40.178049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.730 [2024-11-15 12:51:40.178129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.730 [2024-11-15 12:51:40.178155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.730 [2024-11-15 12:51:40.181950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.730 [2024-11-15 12:51:40.181988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.730 [2024-11-15 12:51:40.182032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.731 [2024-11-15 12:51:40.185804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.731 [2024-11-15 12:51:40.185843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.731 [2024-11-15 12:51:40.185856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.731 [2024-11-15 12:51:40.189676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.731 [2024-11-15 12:51:40.189759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.731 [2024-11-15 12:51:40.189788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.731 [2024-11-15 12:51:40.193556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.731 [2024-11-15 12:51:40.193628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.731 [2024-11-15 12:51:40.193641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.731 [2024-11-15 12:51:40.197478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x2497400) 00:16:31.731 [2024-11-15 12:51:40.197526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.731 [2024-11-15 12:51:40.197553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.731 [2024-11-15 12:51:40.201417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.731 [2024-11-15 12:51:40.201464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.731 [2024-11-15 12:51:40.201492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.731 [2024-11-15 12:51:40.205357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.731 [2024-11-15 12:51:40.205405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.731 [2024-11-15 12:51:40.205432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.731 [2024-11-15 12:51:40.209247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.731 [2024-11-15 12:51:40.209295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.731 [2024-11-15 12:51:40.209322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.731 [2024-11-15 12:51:40.213306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.731 [2024-11-15 12:51:40.213354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.731 [2024-11-15 12:51:40.213381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.731 [2024-11-15 12:51:40.217175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.731 [2024-11-15 12:51:40.217222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.731 [2024-11-15 12:51:40.217250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.731 [2024-11-15 12:51:40.221127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.731 [2024-11-15 12:51:40.221175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.731 [2024-11-15 12:51:40.221202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.731 [2024-11-15 12:51:40.224994] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.731 [2024-11-15 12:51:40.225043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.731 [2024-11-15 12:51:40.225071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.731 [2024-11-15 12:51:40.228922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.731 [2024-11-15 12:51:40.228971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.731 [2024-11-15 12:51:40.228998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.731 [2024-11-15 12:51:40.232869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.731 [2024-11-15 12:51:40.232917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.731 [2024-11-15 12:51:40.232944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.731 [2024-11-15 12:51:40.236745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.731 [2024-11-15 12:51:40.236792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.731 [2024-11-15 12:51:40.236819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.731 [2024-11-15 12:51:40.240686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.731 [2024-11-15 12:51:40.240734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.731 [2024-11-15 12:51:40.240760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.731 [2024-11-15 12:51:40.244574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.731 [2024-11-15 12:51:40.244630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.731 [2024-11-15 12:51:40.244659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.731 [2024-11-15 12:51:40.248469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.731 [2024-11-15 12:51:40.248516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.731 [2024-11-15 12:51:40.248543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:16:31.731 [2024-11-15 12:51:40.252347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.731 [2024-11-15 12:51:40.252395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.731 [2024-11-15 12:51:40.252422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.731 [2024-11-15 12:51:40.256318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.731 [2024-11-15 12:51:40.256366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.731 [2024-11-15 12:51:40.256393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.731 [2024-11-15 12:51:40.260236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.731 [2024-11-15 12:51:40.260284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.731 [2024-11-15 12:51:40.260311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.731 [2024-11-15 12:51:40.264191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.731 [2024-11-15 12:51:40.264239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.731 [2024-11-15 12:51:40.264267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.731 [2024-11-15 12:51:40.268103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.731 [2024-11-15 12:51:40.268151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.731 [2024-11-15 12:51:40.268179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.731 [2024-11-15 12:51:40.272030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.731 [2024-11-15 12:51:40.272079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.731 [2024-11-15 12:51:40.272106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.731 [2024-11-15 12:51:40.275952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.731 [2024-11-15 12:51:40.276000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.731 [2024-11-15 12:51:40.276027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.731 [2024-11-15 12:51:40.279891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.731 [2024-11-15 12:51:40.279939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.731 [2024-11-15 12:51:40.279966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.731 [2024-11-15 12:51:40.283793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.731 [2024-11-15 12:51:40.283840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.731 [2024-11-15 12:51:40.283867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.731 [2024-11-15 12:51:40.287756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.731 [2024-11-15 12:51:40.287803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.732 [2024-11-15 12:51:40.287830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.732 [2024-11-15 12:51:40.291701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.732 [2024-11-15 12:51:40.291749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.732 [2024-11-15 12:51:40.291776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.732 [2024-11-15 12:51:40.295575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.732 [2024-11-15 12:51:40.295633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.732 [2024-11-15 12:51:40.295662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.732 [2024-11-15 12:51:40.299463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.732 [2024-11-15 12:51:40.299511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.732 [2024-11-15 12:51:40.299538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.732 [2024-11-15 12:51:40.303431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.732 [2024-11-15 12:51:40.303479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.732 [2024-11-15 12:51:40.303507] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.732 [2024-11-15 12:51:40.307460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.732 [2024-11-15 12:51:40.307509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.732 [2024-11-15 12:51:40.307536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.732 [2024-11-15 12:51:40.311476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.732 [2024-11-15 12:51:40.311525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.732 [2024-11-15 12:51:40.311552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.732 [2024-11-15 12:51:40.315431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.732 [2024-11-15 12:51:40.315496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.732 [2024-11-15 12:51:40.315523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.732 [2024-11-15 12:51:40.319424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.732 [2024-11-15 12:51:40.319473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.732 [2024-11-15 12:51:40.319500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.732 [2024-11-15 12:51:40.323395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.732 [2024-11-15 12:51:40.323444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.732 [2024-11-15 12:51:40.323471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.732 [2024-11-15 12:51:40.327512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.732 [2024-11-15 12:51:40.327562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.732 [2024-11-15 12:51:40.327589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.732 [2024-11-15 12:51:40.331516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.732 [2024-11-15 12:51:40.331565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:31.732 [2024-11-15 12:51:40.331591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.732 [2024-11-15 12:51:40.335427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.732 [2024-11-15 12:51:40.335476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.732 [2024-11-15 12:51:40.335503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.732 [2024-11-15 12:51:40.339456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.732 [2024-11-15 12:51:40.339504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.732 [2024-11-15 12:51:40.339531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.732 [2024-11-15 12:51:40.343443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.732 [2024-11-15 12:51:40.343491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.732 [2024-11-15 12:51:40.343518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.732 [2024-11-15 12:51:40.347474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.732 [2024-11-15 12:51:40.347523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.732 [2024-11-15 12:51:40.347550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.732 [2024-11-15 12:51:40.351435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.732 [2024-11-15 12:51:40.351484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.732 [2024-11-15 12:51:40.351511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.732 [2024-11-15 12:51:40.355406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.732 [2024-11-15 12:51:40.355454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.732 [2024-11-15 12:51:40.355482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.732 [2024-11-15 12:51:40.359406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.732 [2024-11-15 12:51:40.359453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8960 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.732 [2024-11-15 12:51:40.359481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.732 [2024-11-15 12:51:40.363284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.732 [2024-11-15 12:51:40.363332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.732 [2024-11-15 12:51:40.363359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:31.732 [2024-11-15 12:51:40.367243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.732 [2024-11-15 12:51:40.367292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.732 [2024-11-15 12:51:40.367319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:31.732 [2024-11-15 12:51:40.371212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.732 [2024-11-15 12:51:40.371260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.732 [2024-11-15 12:51:40.371286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:31.732 [2024-11-15 12:51:40.375173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2497400) 00:16:31.732 [2024-11-15 12:51:40.375222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.732 [2024-11-15 12:51:40.375249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:31.732 7649.00 IOPS, 956.12 MiB/s 00:16:31.732 Latency(us) 00:16:31.732 [2024-11-15T12:51:40.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.732 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:31.732 nvme0n1 : 2.00 7647.27 955.91 0.00 0.00 2089.26 1757.56 10545.34 00:16:31.732 [2024-11-15T12:51:40.402Z] =================================================================================================================== 00:16:31.732 [2024-11-15T12:51:40.402Z] Total : 7647.27 955.91 0.00 0.00 2089.26 1757.56 10545.34 00:16:31.732 { 00:16:31.732 "results": [ 00:16:31.732 { 00:16:31.732 "job": "nvme0n1", 00:16:31.732 "core_mask": "0x2", 00:16:31.732 "workload": "randread", 00:16:31.732 "status": "finished", 00:16:31.732 "queue_depth": 16, 00:16:31.732 "io_size": 131072, 00:16:31.732 "runtime": 2.002546, 00:16:31.732 "iops": 7647.265031614755, 00:16:31.732 "mibps": 955.9081289518443, 00:16:31.732 "io_failed": 0, 00:16:31.732 "io_timeout": 0, 00:16:31.732 "avg_latency_us": 2089.2615090172985, 00:16:31.732 "min_latency_us": 1757.5563636363636, 00:16:31.732 "max_latency_us": 10545.338181818182 00:16:31.732 } 00:16:31.732 ], 00:16:31.732 "core_count": 1 00:16:31.732 } 00:16:31.991 
12:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:31.991 12:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:31.991 12:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:31.991 | .driver_specific 00:16:31.991 | .nvme_error 00:16:31.991 | .status_code 00:16:31.991 | .command_transient_transport_error' 00:16:31.991 12:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:32.250 12:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 494 > 0 )) 00:16:32.250 12:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79648 00:16:32.250 12:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79648 ']' 00:16:32.250 12:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79648 00:16:32.250 12:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:16:32.250 12:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:32.250 12:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79648 00:16:32.250 killing process with pid 79648 00:16:32.250 Received shutdown signal, test time was about 2.000000 seconds 00:16:32.250 00:16:32.250 Latency(us) 00:16:32.250 [2024-11-15T12:51:40.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.250 [2024-11-15T12:51:40.920Z] =================================================================================================================== 00:16:32.250 [2024-11-15T12:51:40.920Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:32.250 12:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:32.250 12:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:32.250 12:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79648' 00:16:32.251 12:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79648 00:16:32.251 12:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79648 00:16:32.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:16:32.251 12:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:16:32.251 12:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:32.251 12:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:16:32.251 12:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:16:32.251 12:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:16:32.251 12:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79701 00:16:32.251 12:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:16:32.251 12:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79701 /var/tmp/bperf.sock 00:16:32.251 12:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79701 ']' 00:16:32.251 12:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:32.251 12:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:32.251 12:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:32.251 12:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:32.251 12:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:32.251 [2024-11-15 12:51:40.893221] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:16:32.251 [2024-11-15 12:51:40.893322] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79701 ] 00:16:32.509 [2024-11-15 12:51:41.030364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.509 [2024-11-15 12:51:41.061169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.509 [2024-11-15 12:51:41.089483] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:32.509 12:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:32.509 12:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:16:32.509 12:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:32.509 12:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:32.767 12:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:32.767 12:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.767 12:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:32.767 12:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.767 12:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:32.767 12:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:33.335 nvme0n1 00:16:33.335 12:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:33.335 12:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.335 12:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:33.335 12:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.335 12:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:33.335 12:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:33.335 Running I/O for 2 seconds... 
00:16:33.335 [2024-11-15 12:51:41.846948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f7100 00:16:33.335 [2024-11-15 12:51:41.848542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.335 [2024-11-15 12:51:41.848597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:33.335 [2024-11-15 12:51:41.862525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f7970 00:16:33.335 [2024-11-15 12:51:41.864167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.335 [2024-11-15 12:51:41.864234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.335 [2024-11-15 12:51:41.878688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f81e0 00:16:33.335 [2024-11-15 12:51:41.880315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.335 [2024-11-15 12:51:41.880364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:33.335 [2024-11-15 12:51:41.893566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f8a50 00:16:33.336 [2024-11-15 12:51:41.895208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.336 [2024-11-15 12:51:41.895255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:33.336 [2024-11-15 12:51:41.908304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f92c0 00:16:33.336 [2024-11-15 12:51:41.909885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.336 [2024-11-15 12:51:41.909924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:33.336 [2024-11-15 12:51:41.922287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f9b30 00:16:33.336 [2024-11-15 12:51:41.923794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.336 [2024-11-15 12:51:41.923840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:33.336 [2024-11-15 12:51:41.935930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166fa3a0 00:16:33.336 [2024-11-15 12:51:41.937308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.336 [2024-11-15 12:51:41.937354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0076 
p:0 m:0 dnr:0 00:16:33.336 [2024-11-15 12:51:41.949463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166fac10 00:16:33.336 [2024-11-15 12:51:41.950957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.336 [2024-11-15 12:51:41.951004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:33.336 [2024-11-15 12:51:41.962876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166fb480 00:16:33.336 [2024-11-15 12:51:41.964223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.336 [2024-11-15 12:51:41.964269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:33.336 [2024-11-15 12:51:41.976304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166fbcf0 00:16:33.336 [2024-11-15 12:51:41.977721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.336 [2024-11-15 12:51:41.977769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:33.336 [2024-11-15 12:51:41.989571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166fc560 00:16:33.336 [2024-11-15 12:51:41.991057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.336 [2024-11-15 12:51:41.991103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:33.336 [2024-11-15 12:51:42.003563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166fcdd0 00:16:33.595 [2024-11-15 12:51:42.005071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.595 [2024-11-15 12:51:42.005137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:33.595 [2024-11-15 12:51:42.017663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166fd640 00:16:33.595 [2024-11-15 12:51:42.019029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.595 [2024-11-15 12:51:42.019075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:33.595 [2024-11-15 12:51:42.031233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166fdeb0 00:16:33.595 [2024-11-15 12:51:42.032506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.595 [2024-11-15 12:51:42.032552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 
sqhd:0068 p:0 m:0 dnr:0 00:16:33.595 [2024-11-15 12:51:42.044586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166fe720 00:16:33.595 [2024-11-15 12:51:42.045915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.595 [2024-11-15 12:51:42.045963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:33.595 [2024-11-15 12:51:42.057958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166ff3c8 00:16:33.595 [2024-11-15 12:51:42.059279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.595 [2024-11-15 12:51:42.059325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:33.595 [2024-11-15 12:51:42.076812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166ff3c8 00:16:33.595 [2024-11-15 12:51:42.079120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.595 [2024-11-15 12:51:42.079167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.595 [2024-11-15 12:51:42.090185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166fe720 00:16:33.595 [2024-11-15 12:51:42.092416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.595 [2024-11-15 12:51:42.092477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:33.595 [2024-11-15 12:51:42.103529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166fdeb0 00:16:33.595 [2024-11-15 12:51:42.105737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.595 [2024-11-15 12:51:42.105786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:33.595 [2024-11-15 12:51:42.116807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166fd640 00:16:33.595 [2024-11-15 12:51:42.119038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.595 [2024-11-15 12:51:42.119084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:33.595 [2024-11-15 12:51:42.130223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166fcdd0 00:16:33.595 [2024-11-15 12:51:42.132346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.595 [2024-11-15 12:51:42.132391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 
cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:33.595 [2024-11-15 12:51:42.143557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166fc560 00:16:33.595 [2024-11-15 12:51:42.145729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.595 [2024-11-15 12:51:42.145778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:33.595 [2024-11-15 12:51:42.156953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166fbcf0 00:16:33.595 [2024-11-15 12:51:42.159228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.595 [2024-11-15 12:51:42.159274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:33.595 [2024-11-15 12:51:42.170485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166fb480 00:16:33.595 [2024-11-15 12:51:42.172616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.595 [2024-11-15 12:51:42.172670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:33.595 [2024-11-15 12:51:42.183855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166fac10 00:16:33.595 [2024-11-15 12:51:42.185985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.595 [2024-11-15 12:51:42.186033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:33.595 [2024-11-15 12:51:42.197816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166fa3a0 00:16:33.595 [2024-11-15 12:51:42.200069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.595 [2024-11-15 12:51:42.200116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:33.595 [2024-11-15 12:51:42.213080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f9b30 00:16:33.595 [2024-11-15 12:51:42.215445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.595 [2024-11-15 12:51:42.215478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:33.595 [2024-11-15 12:51:42.228264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f92c0 00:16:33.595 [2024-11-15 12:51:42.230568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.595 [2024-11-15 12:51:42.230636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:33.595 [2024-11-15 12:51:42.242924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f8a50 00:16:33.595 [2024-11-15 12:51:42.245012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.595 [2024-11-15 12:51:42.245059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:33.595 [2024-11-15 12:51:42.257082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f81e0 00:16:33.595 [2024-11-15 12:51:42.259284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.595 [2024-11-15 12:51:42.259347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:33.855 [2024-11-15 12:51:42.272564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f7970 00:16:33.855 [2024-11-15 12:51:42.274795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.855 [2024-11-15 12:51:42.274842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:33.855 [2024-11-15 12:51:42.286873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f7100 00:16:33.855 [2024-11-15 12:51:42.288912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.855 [2024-11-15 12:51:42.288959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:33.855 [2024-11-15 12:51:42.301091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f6890 00:16:33.855 [2024-11-15 12:51:42.303160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.855 [2024-11-15 12:51:42.303208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.855 [2024-11-15 12:51:42.315163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f6020 00:16:33.855 [2024-11-15 12:51:42.317176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.855 [2024-11-15 12:51:42.317222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:33.855 [2024-11-15 12:51:42.329339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f57b0 00:16:33.855 [2024-11-15 12:51:42.331413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.855 [2024-11-15 12:51:42.331460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:33.855 [2024-11-15 12:51:42.343545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f4f40 00:16:33.855 [2024-11-15 12:51:42.345524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.855 [2024-11-15 12:51:42.345571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:33.855 [2024-11-15 12:51:42.357733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f46d0 00:16:33.855 [2024-11-15 12:51:42.359756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.855 [2024-11-15 12:51:42.359801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:33.855 [2024-11-15 12:51:42.371951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f3e60 00:16:33.855 [2024-11-15 12:51:42.374021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.855 [2024-11-15 12:51:42.374083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:33.855 [2024-11-15 12:51:42.386154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f35f0 00:16:33.855 [2024-11-15 12:51:42.388006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:8655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.855 [2024-11-15 12:51:42.388053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:33.855 [2024-11-15 12:51:42.399754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f2d80 00:16:33.855 [2024-11-15 12:51:42.401588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.855 [2024-11-15 12:51:42.401656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:33.855 [2024-11-15 12:51:42.413205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f2510 00:16:33.855 [2024-11-15 12:51:42.415142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.855 [2024-11-15 12:51:42.415188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:33.855 [2024-11-15 12:51:42.426636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f1ca0 00:16:33.855 [2024-11-15 12:51:42.428504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.855 [2024-11-15 12:51:42.428550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:33.855 [2024-11-15 12:51:42.440028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f1430 00:16:33.855 [2024-11-15 12:51:42.441952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.855 [2024-11-15 12:51:42.442000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:33.855 [2024-11-15 12:51:42.453491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f0bc0 00:16:33.855 [2024-11-15 12:51:42.455390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.855 [2024-11-15 12:51:42.455434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:33.855 [2024-11-15 12:51:42.467095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f0350 00:16:33.855 [2024-11-15 12:51:42.468898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:3874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.855 [2024-11-15 12:51:42.468945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:33.855 [2024-11-15 12:51:42.480411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166efae0 00:16:33.855 [2024-11-15 12:51:42.482290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.855 [2024-11-15 12:51:42.482335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:33.855 [2024-11-15 12:51:42.493977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166ef270 00:16:33.855 [2024-11-15 12:51:42.495747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.855 [2024-11-15 12:51:42.495794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:33.855 [2024-11-15 12:51:42.507260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166eea00 00:16:33.855 [2024-11-15 12:51:42.508980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.855 [2024-11-15 12:51:42.509026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:33.855 [2024-11-15 12:51:42.520861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166ee190 00:16:33.855 [2024-11-15 12:51:42.522847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.855 [2024-11-15 12:51:42.522893] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:34.115 [2024-11-15 12:51:42.535301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166ed920 00:16:34.115 [2024-11-15 12:51:42.537032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.115 [2024-11-15 12:51:42.537078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:34.115 [2024-11-15 12:51:42.548987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166ed0b0 00:16:34.115 [2024-11-15 12:51:42.550770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.115 [2024-11-15 12:51:42.550816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:34.115 [2024-11-15 12:51:42.562520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166ec840 00:16:34.115 [2024-11-15 12:51:42.564179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.115 [2024-11-15 12:51:42.564224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:34.115 [2024-11-15 12:51:42.576064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166ebfd0 00:16:34.115 [2024-11-15 12:51:42.577752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.115 [2024-11-15 12:51:42.577800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:34.115 [2024-11-15 12:51:42.589415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166eb760 00:16:34.115 [2024-11-15 12:51:42.591220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.115 [2024-11-15 12:51:42.591267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:34.115 [2024-11-15 12:51:42.602982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166eaef0 00:16:34.115 [2024-11-15 12:51:42.604590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.115 [2024-11-15 12:51:42.604643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:34.115 [2024-11-15 12:51:42.616339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166ea680 00:16:34.115 [2024-11-15 12:51:42.618075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.115 [2024-11-15 12:51:42.618135] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:34.115 [2024-11-15 12:51:42.629807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e9e10 00:16:34.115 [2024-11-15 12:51:42.631426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.115 [2024-11-15 12:51:42.631471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:34.115 [2024-11-15 12:51:42.643364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e95a0 00:16:34.115 [2024-11-15 12:51:42.645060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.115 [2024-11-15 12:51:42.645106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:34.115 [2024-11-15 12:51:42.656748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e8d30 00:16:34.115 [2024-11-15 12:51:42.658412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.115 [2024-11-15 12:51:42.658457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:34.115 [2024-11-15 12:51:42.670283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e84c0 00:16:34.115 [2024-11-15 12:51:42.671854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.115 [2024-11-15 12:51:42.671900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:34.115 [2024-11-15 12:51:42.683684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e7c50 00:16:34.115 [2024-11-15 12:51:42.685208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.115 [2024-11-15 12:51:42.685255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:34.115 [2024-11-15 12:51:42.697098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e73e0 00:16:34.115 [2024-11-15 12:51:42.698732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.115 [2024-11-15 12:51:42.698779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:34.115 [2024-11-15 12:51:42.710466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e6b70 00:16:34.115 [2024-11-15 12:51:42.712004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.115 [2024-11-15 
12:51:42.712051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:34.115 [2024-11-15 12:51:42.723802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e6300 00:16:34.115 [2024-11-15 12:51:42.725310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.115 [2024-11-15 12:51:42.725356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:34.115 [2024-11-15 12:51:42.737158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e5a90 00:16:34.115 [2024-11-15 12:51:42.738694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.115 [2024-11-15 12:51:42.738746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.115 [2024-11-15 12:51:42.750502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e5220 00:16:34.115 [2024-11-15 12:51:42.751987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.115 [2024-11-15 12:51:42.752032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:34.115 [2024-11-15 12:51:42.763910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e49b0 00:16:34.115 [2024-11-15 12:51:42.765343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.115 [2024-11-15 12:51:42.765389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:34.115 [2024-11-15 12:51:42.777235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e4140 00:16:34.115 [2024-11-15 12:51:42.778738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.115 [2024-11-15 12:51:42.778783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:34.375 [2024-11-15 12:51:42.792059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e38d0 00:16:34.375 [2024-11-15 12:51:42.793463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.375 [2024-11-15 12:51:42.793511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:34.375 [2024-11-15 12:51:42.805488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e3060 00:16:34.375 [2024-11-15 12:51:42.807034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:32 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:34.375 [2024-11-15 12:51:42.807082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:34.375 [2024-11-15 12:51:42.818970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e27f0 00:16:34.375 [2024-11-15 12:51:42.820341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.375 [2024-11-15 12:51:42.820388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:34.375 [2024-11-15 12:51:42.832414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e1f80 00:16:34.375 [2024-11-15 12:51:42.833849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.375 [2024-11-15 12:51:42.833898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:34.375 18344.00 IOPS, 71.66 MiB/s [2024-11-15T12:51:43.045Z] [2024-11-15 12:51:42.847172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e1710 00:16:34.375 [2024-11-15 12:51:42.848529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.375 [2024-11-15 12:51:42.848575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:34.375 [2024-11-15 12:51:42.861191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e0ea0 00:16:34.375 [2024-11-15 12:51:42.862649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.375 [2024-11-15 12:51:42.862703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:34.375 [2024-11-15 12:51:42.875629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e0630 00:16:34.375 [2024-11-15 12:51:42.877133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.375 [2024-11-15 12:51:42.877180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:34.375 [2024-11-15 12:51:42.891331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166dfdc0 00:16:34.375 [2024-11-15 12:51:42.892838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.375 [2024-11-15 12:51:42.892871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:34.375 [2024-11-15 12:51:42.906757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166df550 00:16:34.375 [2024-11-15 12:51:42.908138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:17583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.375 [2024-11-15 12:51:42.908184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:34.375 [2024-11-15 12:51:42.920910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166dece0 00:16:34.375 [2024-11-15 12:51:42.922322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.375 [2024-11-15 12:51:42.922369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:34.375 [2024-11-15 12:51:42.934646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166de470 00:16:34.375 [2024-11-15 12:51:42.935911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.375 [2024-11-15 12:51:42.935958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:34.375 [2024-11-15 12:51:42.953674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166ddc00 00:16:34.375 [2024-11-15 12:51:42.955936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.375 [2024-11-15 12:51:42.955983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:34.375 [2024-11-15 12:51:42.967496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166de470 00:16:34.375 [2024-11-15 12:51:42.969808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.375 [2024-11-15 12:51:42.969844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:34.375 [2024-11-15 12:51:42.980991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166dece0 00:16:34.375 [2024-11-15 12:51:42.983301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.375 [2024-11-15 12:51:42.983347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:34.375 [2024-11-15 12:51:42.994825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166df550 00:16:34.375 [2024-11-15 12:51:42.996979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.375 [2024-11-15 12:51:42.997026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:34.375 [2024-11-15 12:51:43.008268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166dfdc0 00:16:34.375 [2024-11-15 12:51:43.010577] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.375 [2024-11-15 12:51:43.010635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:34.375 [2024-11-15 12:51:43.021811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e0630 00:16:34.375 [2024-11-15 12:51:43.023992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.375 [2024-11-15 12:51:43.024038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:34.375 [2024-11-15 12:51:43.035501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e0ea0 00:16:34.375 [2024-11-15 12:51:43.037628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.375 [2024-11-15 12:51:43.037696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:34.635 [2024-11-15 12:51:43.050177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e1710 00:16:34.635 [2024-11-15 12:51:43.052265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.635 [2024-11-15 12:51:43.052312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:34.635 [2024-11-15 12:51:43.063759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e1f80 00:16:34.635 [2024-11-15 12:51:43.065893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.635 [2024-11-15 12:51:43.065942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:34.635 [2024-11-15 12:51:43.077160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e27f0 00:16:34.635 [2024-11-15 12:51:43.079287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.635 [2024-11-15 12:51:43.079332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:34.635 [2024-11-15 12:51:43.090569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e3060 00:16:34.635 [2024-11-15 12:51:43.092685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.635 [2024-11-15 12:51:43.092759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:34.635 [2024-11-15 12:51:43.104052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e38d0 00:16:34.635 [2024-11-15 12:51:43.106181] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.635 [2024-11-15 12:51:43.106227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:34.635 [2024-11-15 12:51:43.117462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e4140 00:16:34.635 [2024-11-15 12:51:43.119628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.635 [2024-11-15 12:51:43.119681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:34.635 [2024-11-15 12:51:43.130943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e49b0 00:16:34.635 [2024-11-15 12:51:43.132938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.635 [2024-11-15 12:51:43.132984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:34.635 [2024-11-15 12:51:43.144224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e5220 00:16:34.635 [2024-11-15 12:51:43.146303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.635 [2024-11-15 12:51:43.146349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:34.635 [2024-11-15 12:51:43.157550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e5a90 00:16:34.635 [2024-11-15 12:51:43.159597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.635 [2024-11-15 12:51:43.159650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:34.635 [2024-11-15 12:51:43.170944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e6300 00:16:34.635 [2024-11-15 12:51:43.172898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.635 [2024-11-15 12:51:43.172943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:34.635 [2024-11-15 12:51:43.184205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e6b70 00:16:34.635 [2024-11-15 12:51:43.186275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.635 [2024-11-15 12:51:43.186320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:34.635 [2024-11-15 12:51:43.197796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e73e0 00:16:34.635 [2024-11-15 12:51:43.199767] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.635 [2024-11-15 12:51:43.199813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:34.635 [2024-11-15 12:51:43.211147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e7c50 00:16:34.635 [2024-11-15 12:51:43.213149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.635 [2024-11-15 12:51:43.213194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:34.635 [2024-11-15 12:51:43.224468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e84c0 00:16:34.635 [2024-11-15 12:51:43.226528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.635 [2024-11-15 12:51:43.226574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:34.635 [2024-11-15 12:51:43.237886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e8d30 00:16:34.635 [2024-11-15 12:51:43.239797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.635 [2024-11-15 12:51:43.239842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:34.635 [2024-11-15 12:51:43.251186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e95a0 00:16:34.635 [2024-11-15 12:51:43.253047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.635 [2024-11-15 12:51:43.253092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:34.635 [2024-11-15 12:51:43.264543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166e9e10 00:16:34.635 [2024-11-15 12:51:43.266483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.635 [2024-11-15 12:51:43.266527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:34.635 [2024-11-15 12:51:43.278144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166ea680 00:16:34.635 [2024-11-15 12:51:43.280014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.635 [2024-11-15 12:51:43.280059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:34.635 [2024-11-15 12:51:43.291592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166eaef0 00:16:34.635 [2024-11-15 
12:51:43.293494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.636 [2024-11-15 12:51:43.293539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:34.895 [2024-11-15 12:51:43.306296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166eb760 00:16:34.896 [2024-11-15 12:51:43.308399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.896 [2024-11-15 12:51:43.308444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:34.896 [2024-11-15 12:51:43.320089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166ebfd0 00:16:34.896 [2024-11-15 12:51:43.321990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.896 [2024-11-15 12:51:43.322067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:34.896 [2024-11-15 12:51:43.333674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166ec840 00:16:34.896 [2024-11-15 12:51:43.335561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.896 [2024-11-15 12:51:43.335628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:34.896 [2024-11-15 12:51:43.347207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166ed0b0 00:16:34.896 [2024-11-15 12:51:43.348973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.896 [2024-11-15 12:51:43.349019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:34.896 [2024-11-15 12:51:43.360690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166ed920 00:16:34.896 [2024-11-15 12:51:43.362521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.896 [2024-11-15 12:51:43.362566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:34.896 [2024-11-15 12:51:43.374225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166ee190 00:16:34.896 [2024-11-15 12:51:43.376135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.896 [2024-11-15 12:51:43.376168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:34.896 [2024-11-15 12:51:43.389182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166eea00 00:16:34.896 
[2024-11-15 12:51:43.391153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.896 [2024-11-15 12:51:43.391185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:34.896 [2024-11-15 12:51:43.405038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166ef270 00:16:34.896 [2024-11-15 12:51:43.407029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.896 [2024-11-15 12:51:43.407078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:34.896 [2024-11-15 12:51:43.419982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166efae0 00:16:34.896 [2024-11-15 12:51:43.421764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.896 [2024-11-15 12:51:43.421814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:34.896 [2024-11-15 12:51:43.434216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f0350 00:16:34.896 [2024-11-15 12:51:43.435956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.896 [2024-11-15 12:51:43.436004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:34.896 [2024-11-15 12:51:43.448380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f0bc0 00:16:34.896 [2024-11-15 12:51:43.450225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.896 [2024-11-15 12:51:43.450271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:34.896 [2024-11-15 12:51:43.462738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f1430 00:16:34.896 [2024-11-15 12:51:43.464446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.896 [2024-11-15 12:51:43.464493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:34.896 [2024-11-15 12:51:43.476930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f1ca0 00:16:34.896 [2024-11-15 12:51:43.478677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.896 [2024-11-15 12:51:43.478723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:34.896 [2024-11-15 12:51:43.491110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f2510 
00:16:34.896 [2024-11-15 12:51:43.492819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.896 [2024-11-15 12:51:43.492866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:34.896 [2024-11-15 12:51:43.505157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f2d80 00:16:34.896 [2024-11-15 12:51:43.506869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.896 [2024-11-15 12:51:43.506915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:34.896 [2024-11-15 12:51:43.519399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f35f0 00:16:34.896 [2024-11-15 12:51:43.521069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.896 [2024-11-15 12:51:43.521117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:34.896 [2024-11-15 12:51:43.533596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f3e60 00:16:34.896 [2024-11-15 12:51:43.535283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.896 [2024-11-15 12:51:43.535330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:34.896 [2024-11-15 12:51:43.547981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f46d0 00:16:34.896 [2024-11-15 12:51:43.549636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.896 [2024-11-15 12:51:43.549718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:34.896 [2024-11-15 12:51:43.562185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f4f40 00:16:34.896 [2024-11-15 12:51:43.563814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:34.896 [2024-11-15 12:51:43.563860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:35.156 [2024-11-15 12:51:43.576414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f57b0 00:16:35.157 [2024-11-15 12:51:43.578062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:35.157 [2024-11-15 12:51:43.578122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:35.157 [2024-11-15 12:51:43.590098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with 
pdu=0x2000166f6020 00:16:35.157 [2024-11-15 12:51:43.591638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:35.157 [2024-11-15 12:51:43.591690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:35.157 [2024-11-15 12:51:43.603523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f6890 00:16:35.157 [2024-11-15 12:51:43.605016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:35.157 [2024-11-15 12:51:43.605062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:35.157 [2024-11-15 12:51:43.616858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f7100 00:16:35.157 [2024-11-15 12:51:43.618466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:35.157 [2024-11-15 12:51:43.618513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:35.157 [2024-11-15 12:51:43.630476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f7970 00:16:35.157 [2024-11-15 12:51:43.631971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:35.157 [2024-11-15 12:51:43.632016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.157 [2024-11-15 12:51:43.644070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f81e0 00:16:35.157 [2024-11-15 12:51:43.645512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:35.157 [2024-11-15 12:51:43.645558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:35.157 [2024-11-15 12:51:43.657368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f8a50 00:16:35.157 [2024-11-15 12:51:43.658898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:35.157 [2024-11-15 12:51:43.658943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:35.157 [2024-11-15 12:51:43.670928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166f92c0 00:16:35.157 [2024-11-15 12:51:43.672336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:35.157 [2024-11-15 12:51:43.672382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:35.157 [2024-11-15 12:51:43.684383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe4a750) with pdu=0x2000166f9b30 00:16:35.157 [2024-11-15 12:51:43.685847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:35.157 [2024-11-15 12:51:43.685895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:35.157 [2024-11-15 12:51:43.697994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166fa3a0 00:16:35.157 [2024-11-15 12:51:43.699423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:35.157 [2024-11-15 12:51:43.699470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:35.157 [2024-11-15 12:51:43.711545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166fac10 00:16:35.157 [2024-11-15 12:51:43.712922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:35.157 [2024-11-15 12:51:43.712969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:35.157 [2024-11-15 12:51:43.725160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166fb480 00:16:35.157 [2024-11-15 12:51:43.726597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:35.157 [2024-11-15 12:51:43.726651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:35.157 [2024-11-15 12:51:43.738687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166fbcf0 00:16:35.157 [2024-11-15 12:51:43.740031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:35.157 [2024-11-15 12:51:43.740077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:35.157 [2024-11-15 12:51:43.752040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166fc560 00:16:35.157 [2024-11-15 12:51:43.753362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:35.157 [2024-11-15 12:51:43.753407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:35.157 [2024-11-15 12:51:43.765395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166fcdd0 00:16:35.157 [2024-11-15 12:51:43.766808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:35.157 [2024-11-15 12:51:43.766853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:35.157 [2024-11-15 12:51:43.778977] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166fd640 00:16:35.157 [2024-11-15 12:51:43.780266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:35.157 [2024-11-15 12:51:43.780312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:35.157 [2024-11-15 12:51:43.792331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166fdeb0 00:16:35.157 [2024-11-15 12:51:43.793662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:35.157 [2024-11-15 12:51:43.793756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:35.157 [2024-11-15 12:51:43.805654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166fe720 00:16:35.157 [2024-11-15 12:51:43.806977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:35.157 [2024-11-15 12:51:43.807022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:35.157 [2024-11-15 12:51:43.819086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4a750) with pdu=0x2000166ff3c8 00:16:35.157 [2024-11-15 12:51:43.820405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:35.157 [2024-11-15 12:51:43.820453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:35.417 00:16:35.417 Latency(us) 00:16:35.417 [2024-11-15T12:51:44.087Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.417 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:35.417 nvme0n1 : 2.00 18327.18 71.59 0.00 0.00 6978.31 3306.59 26095.24 00:16:35.417 [2024-11-15T12:51:44.087Z] =================================================================================================================== 00:16:35.417 [2024-11-15T12:51:44.087Z] Total : 18327.18 71.59 0.00 0.00 6978.31 3306.59 26095.24 00:16:35.417 { 00:16:35.417 "results": [ 00:16:35.417 { 00:16:35.417 "job": "nvme0n1", 00:16:35.417 "core_mask": "0x2", 00:16:35.417 "workload": "randwrite", 00:16:35.417 "status": "finished", 00:16:35.417 "queue_depth": 128, 00:16:35.417 "io_size": 4096, 00:16:35.417 "runtime": 2.00189, 00:16:35.417 "iops": 18327.180814130646, 00:16:35.417 "mibps": 71.59055005519784, 00:16:35.417 "io_failed": 0, 00:16:35.417 "io_timeout": 0, 00:16:35.417 "avg_latency_us": 6978.313767961167, 00:16:35.417 "min_latency_us": 3306.589090909091, 00:16:35.417 "max_latency_us": 26095.243636363637 00:16:35.417 } 00:16:35.417 ], 00:16:35.417 "core_count": 1 00:16:35.417 } 00:16:35.417 12:51:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:35.417 12:51:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:35.417 12:51:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:35.417 12:51:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:35.417 | .driver_specific 00:16:35.417 | .nvme_error 00:16:35.417 | .status_code 00:16:35.417 | .command_transient_transport_error' 00:16:35.677 12:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 )) 00:16:35.677 12:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79701 00:16:35.677 12:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79701 ']' 00:16:35.677 12:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79701 00:16:35.677 12:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:16:35.677 12:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:35.677 12:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79701 00:16:35.677 killing process with pid 79701 00:16:35.677 Received shutdown signal, test time was about 2.000000 seconds 00:16:35.677 00:16:35.677 Latency(us) 00:16:35.677 [2024-11-15T12:51:44.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.677 [2024-11-15T12:51:44.347Z] =================================================================================================================== 00:16:35.677 [2024-11-15T12:51:44.347Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:35.677 12:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:35.677 12:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:35.677 12:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79701' 00:16:35.677 12:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79701 00:16:35.677 12:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79701 00:16:35.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
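The pass/fail gate for the run summarized above is the transient-error counter read back over the bperf RPC socket. A minimal sketch of that check, using only the rpc.py call and jq filter traced above (the errcount variable and the exit on failure are illustrative shorthand, not the harness's actual control flow):

# Read per-bdev I/O statistics from the bdevperf instance and extract the
# count of writes completed with COMMAND TRANSIENT TRANSPORT ERROR, which
# bdev_nvme records when --nvme-error-stat is enabled.
errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
               bdev_get_iostat -b nvme0n1 |
           jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error')

# The digest-error test only passes if at least one such error was observed
# (143 in this run); otherwise the injected CRC corruption went unnoticed.
(( errcount > 0 )) || exit 1
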
00:16:35.677 12:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:16:35.677 12:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:35.677 12:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:16:35.677 12:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:16:35.677 12:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:16:35.677 12:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79748 00:16:35.677 12:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79748 /var/tmp/bperf.sock 00:16:35.677 12:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79748 ']' 00:16:35.677 12:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:16:35.677 12:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:35.677 12:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:35.677 12:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:35.677 12:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:35.677 12:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:35.937 [2024-11-15 12:51:44.353399] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:16:35.937 [2024-11-15 12:51:44.353507] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79748 ] 00:16:35.937 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:35.937 Zero copy mechanism will not be used. 
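For reference, the launch traced above amounts to the following; the backgrounding with & and the bperfpid assignment are an illustrative reconstruction of what run_bperf_err does, while the binary path, socket, and workload arguments are copied from this log:

# Start bdevperf idle (-z): it waits to be configured and started over the
# RPC socket given with -r. Workload parameters: 128 KiB random writes,
# queue depth 16, 2 seconds, core mask 0x2.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Block until the application is listening on the UNIX domain socket before
# issuing any RPCs (waitforlisten is a helper from autotest_common.sh).
waitforlisten "$bperfpid" /var/tmp/bperf.sock
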
00:16:35.937 [2024-11-15 12:51:44.492641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.937 [2024-11-15 12:51:44.520824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.937 [2024-11-15 12:51:44.549755] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:36.873 12:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:36.873 12:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:16:36.873 12:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:36.873 12:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:37.132 12:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:37.132 12:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.132 12:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:37.132 12:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.132 12:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:37.132 12:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:37.391 nvme0n1 00:16:37.391 12:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:37.391 12:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.391 12:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:37.391 12:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.391 12:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:37.391 12:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:37.391 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:37.391 Zero copy mechanism will not be used. 00:16:37.391 Running I/O for 2 seconds... 
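Before the 2-second run begins, the xtrace above wires everything up over RPC. Sketched as a plain script under the same assumptions (the rpc and sock shell variables are illustrative; the commands, addresses, and NQN are taken verbatim from the trace, and accel_error_inject_error is sent to the target application's default RPC socket rather than to bperf):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# Count NVMe errors per status code and let the bdev layer keep retrying
# retryable failures, so digest errors show up in the statistics instead of
# failing the workload outright.
$rpc -s $sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the NVMe/TCP controller with data digest enabled (--ddgst), so each
# data PDU carries a CRC32C digest that is verified on the other end.
$rpc -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Inject corruption into the crc32c accel operation (arguments as traced
# above), so data-digest verification fails and the writes complete with
# COMMAND TRANSIENT TRANSPORT ERROR, as in the log lines below.
$rpc accel_error_inject_error -o crc32c -t corrupt -i 32

# Kick off the configured randwrite workload.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests
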
00:16:37.391 [2024-11-15 12:51:45.997489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.391 [2024-11-15 12:51:45.997607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.391 [2024-11-15 12:51:45.997665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.391 [2024-11-15 12:51:46.002390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.391 [2024-11-15 12:51:46.002486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.391 [2024-11-15 12:51:46.002509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.391 [2024-11-15 12:51:46.007200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.391 [2024-11-15 12:51:46.007316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.391 [2024-11-15 12:51:46.007339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.391 [2024-11-15 12:51:46.011926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.391 [2024-11-15 12:51:46.012043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.391 [2024-11-15 12:51:46.012064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.391 [2024-11-15 12:51:46.016545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.392 [2024-11-15 12:51:46.016660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.392 [2024-11-15 12:51:46.016681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.392 [2024-11-15 12:51:46.021065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.392 [2024-11-15 12:51:46.021182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.392 [2024-11-15 12:51:46.021203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.392 [2024-11-15 12:51:46.025526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.392 [2024-11-15 12:51:46.025657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.392 [2024-11-15 12:51:46.025731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:16:37.392 [2024-11-15 12:51:46.030281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.392 [2024-11-15 12:51:46.030398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.392 [2024-11-15 12:51:46.030419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.392 [2024-11-15 12:51:46.034891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.392 [2024-11-15 12:51:46.035004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.392 [2024-11-15 12:51:46.035025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.392 [2024-11-15 12:51:46.039403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.392 [2024-11-15 12:51:46.039539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.392 [2024-11-15 12:51:46.039559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.392 [2024-11-15 12:51:46.044014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.392 [2024-11-15 12:51:46.044127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.392 [2024-11-15 12:51:46.044147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.392 [2024-11-15 12:51:46.048478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.392 [2024-11-15 12:51:46.048586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.392 [2024-11-15 12:51:46.048607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.392 [2024-11-15 12:51:46.053000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.392 [2024-11-15 12:51:46.053130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.392 [2024-11-15 12:51:46.053150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.392 [2024-11-15 12:51:46.057897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.392 [2024-11-15 12:51:46.058069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.392 [2024-11-15 12:51:46.058103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.653 [2024-11-15 12:51:46.062960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.653 [2024-11-15 12:51:46.063084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.653 [2024-11-15 12:51:46.063107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.653 [2024-11-15 12:51:46.067720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.653 [2024-11-15 12:51:46.067834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.653 [2024-11-15 12:51:46.067869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.653 [2024-11-15 12:51:46.072252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.653 [2024-11-15 12:51:46.072387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.653 [2024-11-15 12:51:46.072407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.653 [2024-11-15 12:51:46.076933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.653 [2024-11-15 12:51:46.077035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.653 [2024-11-15 12:51:46.077056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.653 [2024-11-15 12:51:46.081644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.653 [2024-11-15 12:51:46.081824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.653 [2024-11-15 12:51:46.081847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.653 [2024-11-15 12:51:46.086483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.653 [2024-11-15 12:51:46.086600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.653 [2024-11-15 12:51:46.086621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.653 [2024-11-15 12:51:46.091032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.653 [2024-11-15 12:51:46.091148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.653 [2024-11-15 12:51:46.091168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.653 [2024-11-15 12:51:46.095624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.653 [2024-11-15 12:51:46.095774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.653 [2024-11-15 12:51:46.095794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.653 [2024-11-15 12:51:46.100176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.653 [2024-11-15 12:51:46.100312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.653 [2024-11-15 12:51:46.100332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.653 [2024-11-15 12:51:46.104806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.653 [2024-11-15 12:51:46.104922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.653 [2024-11-15 12:51:46.104943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.653 [2024-11-15 12:51:46.109377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.653 [2024-11-15 12:51:46.109490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.653 [2024-11-15 12:51:46.109510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.653 [2024-11-15 12:51:46.113931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.653 [2024-11-15 12:51:46.114072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.653 [2024-11-15 12:51:46.114106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.653 [2024-11-15 12:51:46.118537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.653 [2024-11-15 12:51:46.118660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.653 [2024-11-15 12:51:46.118680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.653 [2024-11-15 12:51:46.123102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.653 [2024-11-15 12:51:46.123217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.653 [2024-11-15 12:51:46.123236] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.653 [2024-11-15 12:51:46.127587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.653 [2024-11-15 12:51:46.127711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.653 [2024-11-15 12:51:46.127731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.653 [2024-11-15 12:51:46.132159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.654 [2024-11-15 12:51:46.132274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.654 [2024-11-15 12:51:46.132294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.654 [2024-11-15 12:51:46.136703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.654 [2024-11-15 12:51:46.136816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.654 [2024-11-15 12:51:46.136835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.654 [2024-11-15 12:51:46.141102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.654 [2024-11-15 12:51:46.141215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.654 [2024-11-15 12:51:46.141235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.654 [2024-11-15 12:51:46.145564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.654 [2024-11-15 12:51:46.145716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.654 [2024-11-15 12:51:46.145738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.654 [2024-11-15 12:51:46.150002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.654 [2024-11-15 12:51:46.150140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.654 [2024-11-15 12:51:46.150160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.654 [2024-11-15 12:51:46.154536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.654 [2024-11-15 12:51:46.154667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.654 [2024-11-15 12:51:46.154687] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.654 [2024-11-15 12:51:46.159127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.654 [2024-11-15 12:51:46.159235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.654 [2024-11-15 12:51:46.159256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.654 [2024-11-15 12:51:46.163641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.654 [2024-11-15 12:51:46.163758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.654 [2024-11-15 12:51:46.163778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.654 [2024-11-15 12:51:46.168098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.654 [2024-11-15 12:51:46.168212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.654 [2024-11-15 12:51:46.168231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.654 [2024-11-15 12:51:46.172687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.654 [2024-11-15 12:51:46.172812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.654 [2024-11-15 12:51:46.172833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.654 [2024-11-15 12:51:46.177153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.654 [2024-11-15 12:51:46.177277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.654 [2024-11-15 12:51:46.177297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.654 [2024-11-15 12:51:46.181572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.654 [2024-11-15 12:51:46.181741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.654 [2024-11-15 12:51:46.181763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.654 [2024-11-15 12:51:46.186205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.654 [2024-11-15 12:51:46.186301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.654 [2024-11-15 
12:51:46.186321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.654 [2024-11-15 12:51:46.190761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.654 [2024-11-15 12:51:46.190877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.654 [2024-11-15 12:51:46.190898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.654 [2024-11-15 12:51:46.195270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.654 [2024-11-15 12:51:46.195387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.654 [2024-11-15 12:51:46.195409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.654 [2024-11-15 12:51:46.199956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.654 [2024-11-15 12:51:46.200069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.654 [2024-11-15 12:51:46.200089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.654 [2024-11-15 12:51:46.204432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.654 [2024-11-15 12:51:46.204547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.654 [2024-11-15 12:51:46.204567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.654 [2024-11-15 12:51:46.208979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.654 [2024-11-15 12:51:46.209093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.654 [2024-11-15 12:51:46.209114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.654 [2024-11-15 12:51:46.213441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.654 [2024-11-15 12:51:46.213563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.654 [2024-11-15 12:51:46.213584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.654 [2024-11-15 12:51:46.217992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.654 [2024-11-15 12:51:46.218104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:37.654 [2024-11-15 12:51:46.218125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.654 [2024-11-15 12:51:46.222407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.654 [2024-11-15 12:51:46.222544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.654 [2024-11-15 12:51:46.222564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.654 [2024-11-15 12:51:46.227045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.654 [2024-11-15 12:51:46.227155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.654 [2024-11-15 12:51:46.227175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.654 [2024-11-15 12:51:46.231553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.654 [2024-11-15 12:51:46.231696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.654 [2024-11-15 12:51:46.231717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.654 [2024-11-15 12:51:46.236172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.654 [2024-11-15 12:51:46.236286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.654 [2024-11-15 12:51:46.236305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.654 [2024-11-15 12:51:46.240689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.654 [2024-11-15 12:51:46.240802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.654 [2024-11-15 12:51:46.240822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.654 [2024-11-15 12:51:46.245150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.654 [2024-11-15 12:51:46.245265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.654 [2024-11-15 12:51:46.245285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.654 [2024-11-15 12:51:46.249746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.654 [2024-11-15 12:51:46.249869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:37.654 [2024-11-15 12:51:46.249890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.654 [2024-11-15 12:51:46.254222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.655 [2024-11-15 12:51:46.254358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.655 [2024-11-15 12:51:46.254378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.655 [2024-11-15 12:51:46.258695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.655 [2024-11-15 12:51:46.258833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.655 [2024-11-15 12:51:46.258853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.655 [2024-11-15 12:51:46.263212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.655 [2024-11-15 12:51:46.263342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.655 [2024-11-15 12:51:46.263362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.655 [2024-11-15 12:51:46.267754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.655 [2024-11-15 12:51:46.267857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.655 [2024-11-15 12:51:46.267876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.655 [2024-11-15 12:51:46.272370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.655 [2024-11-15 12:51:46.272511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.655 [2024-11-15 12:51:46.272531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.655 [2024-11-15 12:51:46.276899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.655 [2024-11-15 12:51:46.276977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.655 [2024-11-15 12:51:46.276997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.655 [2024-11-15 12:51:46.281391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.655 [2024-11-15 12:51:46.281506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.655 [2024-11-15 12:51:46.281527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.655 [2024-11-15 12:51:46.285994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.655 [2024-11-15 12:51:46.286164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.655 [2024-11-15 12:51:46.286184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.655 [2024-11-15 12:51:46.290532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.655 [2024-11-15 12:51:46.290672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.655 [2024-11-15 12:51:46.290693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.655 [2024-11-15 12:51:46.295132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.655 [2024-11-15 12:51:46.295261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.655 [2024-11-15 12:51:46.295281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.655 [2024-11-15 12:51:46.299736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.655 [2024-11-15 12:51:46.299846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.655 [2024-11-15 12:51:46.299866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.655 [2024-11-15 12:51:46.304244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.655 [2024-11-15 12:51:46.304365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.655 [2024-11-15 12:51:46.304384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.655 [2024-11-15 12:51:46.308796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.655 [2024-11-15 12:51:46.308910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.655 [2024-11-15 12:51:46.308930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.655 [2024-11-15 12:51:46.313241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.655 [2024-11-15 12:51:46.313355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.655 [2024-11-15 12:51:46.313375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.655 [2024-11-15 12:51:46.318249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.655 [2024-11-15 12:51:46.318360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.655 [2024-11-15 12:51:46.318396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.916 [2024-11-15 12:51:46.323315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.916 [2024-11-15 12:51:46.323452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.916 [2024-11-15 12:51:46.323472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.916 [2024-11-15 12:51:46.328278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.916 [2024-11-15 12:51:46.328392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.916 [2024-11-15 12:51:46.328412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.917 [2024-11-15 12:51:46.333179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.917 [2024-11-15 12:51:46.333315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.917 [2024-11-15 12:51:46.333335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.917 [2024-11-15 12:51:46.337768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.917 [2024-11-15 12:51:46.337891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.917 [2024-11-15 12:51:46.337913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.917 [2024-11-15 12:51:46.342271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.917 [2024-11-15 12:51:46.342405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.917 [2024-11-15 12:51:46.342425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.917 [2024-11-15 12:51:46.346881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.917 [2024-11-15 12:51:46.346982] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.917 [2024-11-15 12:51:46.347017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.917 [2024-11-15 12:51:46.351411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.917 [2024-11-15 12:51:46.351552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.917 [2024-11-15 12:51:46.351571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.917 [2024-11-15 12:51:46.356084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.917 [2024-11-15 12:51:46.356196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.917 [2024-11-15 12:51:46.356215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.917 [2024-11-15 12:51:46.360617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.917 [2024-11-15 12:51:46.360727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.917 [2024-11-15 12:51:46.360747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.917 [2024-11-15 12:51:46.365122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.917 [2024-11-15 12:51:46.365259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.917 [2024-11-15 12:51:46.365279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.917 [2024-11-15 12:51:46.369763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.917 [2024-11-15 12:51:46.369899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.917 [2024-11-15 12:51:46.369920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.917 [2024-11-15 12:51:46.374392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.917 [2024-11-15 12:51:46.374490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.917 [2024-11-15 12:51:46.374511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.917 [2024-11-15 12:51:46.378969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.917 [2024-11-15 12:51:46.379098] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.917 [2024-11-15 12:51:46.379118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.917 [2024-11-15 12:51:46.383510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.917 [2024-11-15 12:51:46.383646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.917 [2024-11-15 12:51:46.383666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.917 [2024-11-15 12:51:46.388050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.917 [2024-11-15 12:51:46.388180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.917 [2024-11-15 12:51:46.388200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.917 [2024-11-15 12:51:46.392658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.917 [2024-11-15 12:51:46.392793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.917 [2024-11-15 12:51:46.392813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.917 [2024-11-15 12:51:46.397229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.917 [2024-11-15 12:51:46.397344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.917 [2024-11-15 12:51:46.397364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.917 [2024-11-15 12:51:46.401785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.917 [2024-11-15 12:51:46.401883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.917 [2024-11-15 12:51:46.401905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.917 [2024-11-15 12:51:46.406255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.917 [2024-11-15 12:51:46.406369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.917 [2024-11-15 12:51:46.406389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.917 [2024-11-15 12:51:46.410863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.917 [2024-11-15 
12:51:46.410982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.917 [2024-11-15 12:51:46.411017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.917 [2024-11-15 12:51:46.415396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.917 [2024-11-15 12:51:46.415509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.917 [2024-11-15 12:51:46.415529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.917 [2024-11-15 12:51:46.419991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.917 [2024-11-15 12:51:46.420107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.917 [2024-11-15 12:51:46.420127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.917 [2024-11-15 12:51:46.424455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.917 [2024-11-15 12:51:46.424597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.917 [2024-11-15 12:51:46.424624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.917 [2024-11-15 12:51:46.429016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.917 [2024-11-15 12:51:46.429133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.917 [2024-11-15 12:51:46.429154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.917 [2024-11-15 12:51:46.433748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.917 [2024-11-15 12:51:46.433855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.917 [2024-11-15 12:51:46.433876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.917 [2024-11-15 12:51:46.438264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.917 [2024-11-15 12:51:46.438404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.917 [2024-11-15 12:51:46.438423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.917 [2024-11-15 12:51:46.442800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 
00:16:37.917 [2024-11-15 12:51:46.442901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.917 [2024-11-15 12:51:46.442921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.917 [2024-11-15 12:51:46.447302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.917 [2024-11-15 12:51:46.447426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.917 [2024-11-15 12:51:46.447447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.917 [2024-11-15 12:51:46.451865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.917 [2024-11-15 12:51:46.451957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.918 [2024-11-15 12:51:46.451977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.918 [2024-11-15 12:51:46.456386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.918 [2024-11-15 12:51:46.456516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.918 [2024-11-15 12:51:46.456535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.918 [2024-11-15 12:51:46.460995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.918 [2024-11-15 12:51:46.461110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.918 [2024-11-15 12:51:46.461132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.918 [2024-11-15 12:51:46.465547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.918 [2024-11-15 12:51:46.465733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.918 [2024-11-15 12:51:46.465755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.918 [2024-11-15 12:51:46.470103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.918 [2024-11-15 12:51:46.470207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.918 [2024-11-15 12:51:46.470227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.918 [2024-11-15 12:51:46.474793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with 
pdu=0x2000166ff3c8 00:16:37.918 [2024-11-15 12:51:46.474935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.918 [2024-11-15 12:51:46.474955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.918 [2024-11-15 12:51:46.479318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.918 [2024-11-15 12:51:46.479458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.918 [2024-11-15 12:51:46.479477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.918 [2024-11-15 12:51:46.483857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.918 [2024-11-15 12:51:46.483955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.918 [2024-11-15 12:51:46.483990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.918 [2024-11-15 12:51:46.488292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.918 [2024-11-15 12:51:46.488411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.918 [2024-11-15 12:51:46.488431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.918 [2024-11-15 12:51:46.492872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.918 [2024-11-15 12:51:46.492968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.918 [2024-11-15 12:51:46.492987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.918 [2024-11-15 12:51:46.497363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.918 [2024-11-15 12:51:46.497461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.918 [2024-11-15 12:51:46.497480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.918 [2024-11-15 12:51:46.501978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.918 [2024-11-15 12:51:46.502123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.918 [2024-11-15 12:51:46.502142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.918 [2024-11-15 12:51:46.506524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.918 [2024-11-15 12:51:46.506818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.918 [2024-11-15 12:51:46.506840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.918 [2024-11-15 12:51:46.511328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.918 [2024-11-15 12:51:46.511427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.918 [2024-11-15 12:51:46.511447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.918 [2024-11-15 12:51:46.515811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.918 [2024-11-15 12:51:46.515910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.918 [2024-11-15 12:51:46.515930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.918 [2024-11-15 12:51:46.520263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.918 [2024-11-15 12:51:46.520385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.918 [2024-11-15 12:51:46.520405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.918 [2024-11-15 12:51:46.524802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.918 [2024-11-15 12:51:46.524949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.918 [2024-11-15 12:51:46.524970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.918 [2024-11-15 12:51:46.529189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.918 [2024-11-15 12:51:46.529284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.918 [2024-11-15 12:51:46.529304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.918 [2024-11-15 12:51:46.533753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.918 [2024-11-15 12:51:46.533887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.918 [2024-11-15 12:51:46.533908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.918 [2024-11-15 12:51:46.538252] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.918 [2024-11-15 12:51:46.538512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.918 [2024-11-15 12:51:46.538533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.918 [2024-11-15 12:51:46.543033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.918 [2024-11-15 12:51:46.543154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.918 [2024-11-15 12:51:46.543174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.918 [2024-11-15 12:51:46.547472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.918 [2024-11-15 12:51:46.547597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.918 [2024-11-15 12:51:46.547644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.918 [2024-11-15 12:51:46.552089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.918 [2024-11-15 12:51:46.552188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.918 [2024-11-15 12:51:46.552208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.918 [2024-11-15 12:51:46.556637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.918 [2024-11-15 12:51:46.556737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.918 [2024-11-15 12:51:46.556756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.918 [2024-11-15 12:51:46.561084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.918 [2024-11-15 12:51:46.561182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.918 [2024-11-15 12:51:46.561202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:37.918 [2024-11-15 12:51:46.565552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.918 [2024-11-15 12:51:46.565720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.918 [2024-11-15 12:51:46.565742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:37.918 [2024-11-15 12:51:46.570159] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.918 [2024-11-15 12:51:46.570414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.918 [2024-11-15 12:51:46.570434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.918 [2024-11-15 12:51:46.574867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.918 [2024-11-15 12:51:46.574967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.918 [2024-11-15 12:51:46.574987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:37.919 [2024-11-15 12:51:46.579345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:37.919 [2024-11-15 12:51:46.579437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.919 [2024-11-15 12:51:46.579458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.179 [2024-11-15 12:51:46.584436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.179 [2024-11-15 12:51:46.584554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.179 [2024-11-15 12:51:46.584575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.179 [2024-11-15 12:51:46.589088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.179 [2024-11-15 12:51:46.589217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.179 [2024-11-15 12:51:46.589237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.179 [2024-11-15 12:51:46.593954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.179 [2024-11-15 12:51:46.594093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.179 [2024-11-15 12:51:46.594113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.179 [2024-11-15 12:51:46.598729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.179 [2024-11-15 12:51:46.598855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.179 [2024-11-15 12:51:46.598875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.180 
[2024-11-15 12:51:46.603287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.180 [2024-11-15 12:51:46.603387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.180 [2024-11-15 12:51:46.603407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.180 [2024-11-15 12:51:46.607879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.180 [2024-11-15 12:51:46.607979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.180 [2024-11-15 12:51:46.607999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.180 [2024-11-15 12:51:46.612403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.180 [2024-11-15 12:51:46.612521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.180 [2024-11-15 12:51:46.612541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.180 [2024-11-15 12:51:46.617336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.180 [2024-11-15 12:51:46.617419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.180 [2024-11-15 12:51:46.617439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.180 [2024-11-15 12:51:46.622300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.180 [2024-11-15 12:51:46.622547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.180 [2024-11-15 12:51:46.622634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.180 [2024-11-15 12:51:46.628523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.180 [2024-11-15 12:51:46.628632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.180 [2024-11-15 12:51:46.628671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.180 [2024-11-15 12:51:46.634715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.180 [2024-11-15 12:51:46.635001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.180 [2024-11-15 12:51:46.635024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:16:38.180 [2024-11-15 12:51:46.640607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.180 [2024-11-15 12:51:46.640697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.180 [2024-11-15 12:51:46.640730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.180 [2024-11-15 12:51:46.645240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.180 [2024-11-15 12:51:46.645319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.180 [2024-11-15 12:51:46.645339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.180 [2024-11-15 12:51:46.650457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.180 [2024-11-15 12:51:46.650716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.180 [2024-11-15 12:51:46.650749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.180 [2024-11-15 12:51:46.655257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.180 [2024-11-15 12:51:46.655357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.180 [2024-11-15 12:51:46.655377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.180 [2024-11-15 12:51:46.659770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.180 [2024-11-15 12:51:46.659870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.180 [2024-11-15 12:51:46.659891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.180 [2024-11-15 12:51:46.664345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.180 [2024-11-15 12:51:46.664445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.180 [2024-11-15 12:51:46.664466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.180 [2024-11-15 12:51:46.668956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.180 [2024-11-15 12:51:46.669054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.180 [2024-11-15 12:51:46.669074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.180 [2024-11-15 12:51:46.673460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.180 [2024-11-15 12:51:46.673579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.180 [2024-11-15 12:51:46.673599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.180 [2024-11-15 12:51:46.678136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.180 [2024-11-15 12:51:46.678368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.180 [2024-11-15 12:51:46.678389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.180 [2024-11-15 12:51:46.682949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.180 [2024-11-15 12:51:46.683049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.180 [2024-11-15 12:51:46.683069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.180 [2024-11-15 12:51:46.687492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.180 [2024-11-15 12:51:46.687628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.180 [2024-11-15 12:51:46.687676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.180 [2024-11-15 12:51:46.692114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.180 [2024-11-15 12:51:46.692213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.180 [2024-11-15 12:51:46.692233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.180 [2024-11-15 12:51:46.696714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.180 [2024-11-15 12:51:46.696833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.180 [2024-11-15 12:51:46.696853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.180 [2024-11-15 12:51:46.701144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.180 [2024-11-15 12:51:46.701241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.180 [2024-11-15 12:51:46.701261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.180 [2024-11-15 12:51:46.705636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.180 [2024-11-15 12:51:46.705771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.180 [2024-11-15 12:51:46.705793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.180 [2024-11-15 12:51:46.710156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.180 [2024-11-15 12:51:46.710236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.180 [2024-11-15 12:51:46.710256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.180 [2024-11-15 12:51:46.714675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.180 [2024-11-15 12:51:46.714802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.180 [2024-11-15 12:51:46.714821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.180 [2024-11-15 12:51:46.719086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.180 [2024-11-15 12:51:46.719196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.180 [2024-11-15 12:51:46.719216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.180 [2024-11-15 12:51:46.723556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.180 [2024-11-15 12:51:46.723678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.180 [2024-11-15 12:51:46.723700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.180 [2024-11-15 12:51:46.728167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.181 [2024-11-15 12:51:46.728287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.181 [2024-11-15 12:51:46.728306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.181 [2024-11-15 12:51:46.732704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.181 [2024-11-15 12:51:46.732785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.181 [2024-11-15 12:51:46.732806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.181 [2024-11-15 12:51:46.737174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.181 [2024-11-15 12:51:46.737294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.181 [2024-11-15 12:51:46.737314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.181 [2024-11-15 12:51:46.741697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.181 [2024-11-15 12:51:46.741861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.181 [2024-11-15 12:51:46.741884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.181 [2024-11-15 12:51:46.746266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.181 [2024-11-15 12:51:46.746364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.181 [2024-11-15 12:51:46.746384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.181 [2024-11-15 12:51:46.750788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.181 [2024-11-15 12:51:46.750876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.181 [2024-11-15 12:51:46.750895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.181 [2024-11-15 12:51:46.755160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.181 [2024-11-15 12:51:46.755279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.181 [2024-11-15 12:51:46.755298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.181 [2024-11-15 12:51:46.759766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.181 [2024-11-15 12:51:46.759843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.181 [2024-11-15 12:51:46.759863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.181 [2024-11-15 12:51:46.764340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.181 [2024-11-15 12:51:46.764436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.181 [2024-11-15 12:51:46.764457] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.181 [2024-11-15 12:51:46.768911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.181 [2024-11-15 12:51:46.769002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.181 [2024-11-15 12:51:46.769037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.181 [2024-11-15 12:51:46.773347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.181 [2024-11-15 12:51:46.773474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.181 [2024-11-15 12:51:46.773493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.181 [2024-11-15 12:51:46.777946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.181 [2024-11-15 12:51:46.778092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.181 [2024-11-15 12:51:46.778112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.181 [2024-11-15 12:51:46.782494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.181 [2024-11-15 12:51:46.782612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.181 [2024-11-15 12:51:46.782647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.181 [2024-11-15 12:51:46.787108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.181 [2024-11-15 12:51:46.787228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.181 [2024-11-15 12:51:46.787247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.181 [2024-11-15 12:51:46.791677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.181 [2024-11-15 12:51:46.791758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.181 [2024-11-15 12:51:46.791778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.181 [2024-11-15 12:51:46.796106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.181 [2024-11-15 12:51:46.796206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.181 [2024-11-15 
12:51:46.796226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.181 [2024-11-15 12:51:46.800625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.181 [2024-11-15 12:51:46.800725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.181 [2024-11-15 12:51:46.800745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.181 [2024-11-15 12:51:46.805108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.181 [2024-11-15 12:51:46.805207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.181 [2024-11-15 12:51:46.805226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.181 [2024-11-15 12:51:46.809528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.181 [2024-11-15 12:51:46.809692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.181 [2024-11-15 12:51:46.809728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.181 [2024-11-15 12:51:46.814174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.181 [2024-11-15 12:51:46.814270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.181 [2024-11-15 12:51:46.814290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.181 [2024-11-15 12:51:46.818703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.181 [2024-11-15 12:51:46.818828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.181 [2024-11-15 12:51:46.818848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.181 [2024-11-15 12:51:46.823177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.181 [2024-11-15 12:51:46.823276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.181 [2024-11-15 12:51:46.823296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.181 [2024-11-15 12:51:46.827660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.181 [2024-11-15 12:51:46.827779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:38.181 [2024-11-15 12:51:46.827799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.181 [2024-11-15 12:51:46.832108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.181 [2024-11-15 12:51:46.832228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.181 [2024-11-15 12:51:46.832247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.181 [2024-11-15 12:51:46.836667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.181 [2024-11-15 12:51:46.836768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.181 [2024-11-15 12:51:46.836789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.181 [2024-11-15 12:51:46.841094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.181 [2024-11-15 12:51:46.841199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.181 [2024-11-15 12:51:46.841220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.442 [2024-11-15 12:51:46.846295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.442 [2024-11-15 12:51:46.846582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.442 [2024-11-15 12:51:46.846605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.442 [2024-11-15 12:51:46.851396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.442 [2024-11-15 12:51:46.851486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.442 [2024-11-15 12:51:46.851506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.442 [2024-11-15 12:51:46.856262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.442 [2024-11-15 12:51:46.856365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.442 [2024-11-15 12:51:46.856385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.442 [2024-11-15 12:51:46.860792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.442 [2024-11-15 12:51:46.860898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:38.442 [2024-11-15 12:51:46.860918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.442 [2024-11-15 12:51:46.865257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.442 [2024-11-15 12:51:46.865355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.442 [2024-11-15 12:51:46.865375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.442 [2024-11-15 12:51:46.869851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.442 [2024-11-15 12:51:46.869940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.442 [2024-11-15 12:51:46.869964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.442 [2024-11-15 12:51:46.874573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.442 [2024-11-15 12:51:46.874871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.442 [2024-11-15 12:51:46.874893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.442 [2024-11-15 12:51:46.879481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.442 [2024-11-15 12:51:46.879581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.442 [2024-11-15 12:51:46.879604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.442 [2024-11-15 12:51:46.884094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.442 [2024-11-15 12:51:46.884194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.442 [2024-11-15 12:51:46.884214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.442 [2024-11-15 12:51:46.888511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.442 [2024-11-15 12:51:46.888642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.442 [2024-11-15 12:51:46.888679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.442 [2024-11-15 12:51:46.893057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.442 [2024-11-15 12:51:46.893150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.442 [2024-11-15 12:51:46.893170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.442 [2024-11-15 12:51:46.897548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.442 [2024-11-15 12:51:46.897742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.442 [2024-11-15 12:51:46.897764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.442 [2024-11-15 12:51:46.902160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.442 [2024-11-15 12:51:46.902249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.442 [2024-11-15 12:51:46.902269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.442 [2024-11-15 12:51:46.906655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.442 [2024-11-15 12:51:46.906786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.442 [2024-11-15 12:51:46.906805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.442 [2024-11-15 12:51:46.911498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.442 [2024-11-15 12:51:46.911598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.442 [2024-11-15 12:51:46.911635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.443 [2024-11-15 12:51:46.916427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.443 [2024-11-15 12:51:46.916507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.443 [2024-11-15 12:51:46.916527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.443 [2024-11-15 12:51:46.921551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.443 [2024-11-15 12:51:46.921736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.443 [2024-11-15 12:51:46.921760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.443 [2024-11-15 12:51:46.926869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.443 [2024-11-15 12:51:46.926977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.443 [2024-11-15 12:51:46.926998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.443 [2024-11-15 12:51:46.932141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.443 [2024-11-15 12:51:46.932239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.443 [2024-11-15 12:51:46.932259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.443 [2024-11-15 12:51:46.937279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.443 [2024-11-15 12:51:46.937379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.443 [2024-11-15 12:51:46.937400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.443 [2024-11-15 12:51:46.942351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.443 [2024-11-15 12:51:46.942626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.443 [2024-11-15 12:51:46.942667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.443 [2024-11-15 12:51:46.947536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.443 [2024-11-15 12:51:46.947669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.443 [2024-11-15 12:51:46.947704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.443 [2024-11-15 12:51:46.952514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.443 [2024-11-15 12:51:46.952630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.443 [2024-11-15 12:51:46.952668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.443 [2024-11-15 12:51:46.957464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.443 [2024-11-15 12:51:46.957564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.443 [2024-11-15 12:51:46.957585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.443 [2024-11-15 12:51:46.962211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.443 [2024-11-15 12:51:46.962459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.443 [2024-11-15 12:51:46.962481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.443 [2024-11-15 12:51:46.967098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.443 [2024-11-15 12:51:46.967380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.443 [2024-11-15 12:51:46.967632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.443 [2024-11-15 12:51:46.972372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.443 [2024-11-15 12:51:46.972679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.443 [2024-11-15 12:51:46.972893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.443 [2024-11-15 12:51:46.977511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.443 [2024-11-15 12:51:46.977826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.443 [2024-11-15 12:51:46.978059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.443 [2024-11-15 12:51:46.982792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.443 [2024-11-15 12:51:46.983046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.443 [2024-11-15 12:51:46.983213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.443 [2024-11-15 12:51:46.988294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.443 [2024-11-15 12:51:46.988600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.443 [2024-11-15 12:51:46.988807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.443 6617.00 IOPS, 827.12 MiB/s [2024-11-15T12:51:47.113Z] [2024-11-15 12:51:46.995382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.443 [2024-11-15 12:51:46.995706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.443 [2024-11-15 12:51:46.995914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.443 [2024-11-15 12:51:47.000718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.443 [2024-11-15 
12:51:47.000976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.443 [2024-11-15 12:51:47.001208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.443 [2024-11-15 12:51:47.006125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.443 [2024-11-15 12:51:47.006383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.443 [2024-11-15 12:51:47.006531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.443 [2024-11-15 12:51:47.011256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.443 [2024-11-15 12:51:47.011478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.443 [2024-11-15 12:51:47.011502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.443 [2024-11-15 12:51:47.016159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.443 [2024-11-15 12:51:47.016260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.443 [2024-11-15 12:51:47.016281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.443 [2024-11-15 12:51:47.020948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.443 [2024-11-15 12:51:47.021049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.443 [2024-11-15 12:51:47.021070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.443 [2024-11-15 12:51:47.025650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.443 [2024-11-15 12:51:47.025780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.443 [2024-11-15 12:51:47.025802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.443 [2024-11-15 12:51:47.030390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.443 [2024-11-15 12:51:47.030490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.443 [2024-11-15 12:51:47.030510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.443 [2024-11-15 12:51:47.035312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 
00:16:38.443 [2024-11-15 12:51:47.035410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.443 [2024-11-15 12:51:47.035431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.443 [2024-11-15 12:51:47.040026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.443 [2024-11-15 12:51:47.040125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.443 [2024-11-15 12:51:47.040145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.443 [2024-11-15 12:51:47.044593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.443 [2024-11-15 12:51:47.044747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.443 [2024-11-15 12:51:47.044768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.443 [2024-11-15 12:51:47.049381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.443 [2024-11-15 12:51:47.049725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.444 [2024-11-15 12:51:47.049748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.444 [2024-11-15 12:51:47.054497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.444 [2024-11-15 12:51:47.054599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.444 [2024-11-15 12:51:47.054635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.444 [2024-11-15 12:51:47.059169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.444 [2024-11-15 12:51:47.059269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.444 [2024-11-15 12:51:47.059289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.444 [2024-11-15 12:51:47.063908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.444 [2024-11-15 12:51:47.064024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.444 [2024-11-15 12:51:47.064044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.444 [2024-11-15 12:51:47.068725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with 
pdu=0x2000166ff3c8 00:16:38.444 [2024-11-15 12:51:47.068824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.444 [2024-11-15 12:51:47.068845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.444 [2024-11-15 12:51:47.073327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.444 [2024-11-15 12:51:47.073407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.444 [2024-11-15 12:51:47.073428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.444 [2024-11-15 12:51:47.078305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.444 [2024-11-15 12:51:47.078434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.444 [2024-11-15 12:51:47.078454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.444 [2024-11-15 12:51:47.083006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.444 [2024-11-15 12:51:47.083104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.444 [2024-11-15 12:51:47.083124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.444 [2024-11-15 12:51:47.087583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.444 [2024-11-15 12:51:47.087729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.444 [2024-11-15 12:51:47.087750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.444 [2024-11-15 12:51:47.092536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.444 [2024-11-15 12:51:47.092817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.444 [2024-11-15 12:51:47.092839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.444 [2024-11-15 12:51:47.097521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.444 [2024-11-15 12:51:47.097710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.444 [2024-11-15 12:51:47.097732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.444 [2024-11-15 12:51:47.102260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.444 [2024-11-15 12:51:47.102381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.444 [2024-11-15 12:51:47.102401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.444 [2024-11-15 12:51:47.107038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.444 [2024-11-15 12:51:47.107157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.444 [2024-11-15 12:51:47.107177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.705 [2024-11-15 12:51:47.112033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.705 [2024-11-15 12:51:47.112123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.705 [2024-11-15 12:51:47.112143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.705 [2024-11-15 12:51:47.116967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.705 [2024-11-15 12:51:47.117056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.705 [2024-11-15 12:51:47.117076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.705 [2024-11-15 12:51:47.121452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.705 [2024-11-15 12:51:47.121571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.705 [2024-11-15 12:51:47.121591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.705 [2024-11-15 12:51:47.126263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.705 [2024-11-15 12:51:47.126361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.705 [2024-11-15 12:51:47.126381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.705 [2024-11-15 12:51:47.131046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.705 [2024-11-15 12:51:47.131153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.705 [2024-11-15 12:51:47.131174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.705 [2024-11-15 12:51:47.135754] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.705 [2024-11-15 12:51:47.135852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.705 [2024-11-15 12:51:47.135873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.705 [2024-11-15 12:51:47.140281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.705 [2024-11-15 12:51:47.140401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.705 [2024-11-15 12:51:47.140421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.705 [2024-11-15 12:51:47.144901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.705 [2024-11-15 12:51:47.144998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.705 [2024-11-15 12:51:47.145018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.705 [2024-11-15 12:51:47.149401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.705 [2024-11-15 12:51:47.149479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.705 [2024-11-15 12:51:47.149500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.705 [2024-11-15 12:51:47.154110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.705 [2024-11-15 12:51:47.154209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.705 [2024-11-15 12:51:47.154229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.705 [2024-11-15 12:51:47.158774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.705 [2024-11-15 12:51:47.158874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.705 [2024-11-15 12:51:47.158894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.705 [2024-11-15 12:51:47.163389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.705 [2024-11-15 12:51:47.163472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.705 [2024-11-15 12:51:47.163493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.705 [2024-11-15 12:51:47.168140] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.705 [2024-11-15 12:51:47.168262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.705 [2024-11-15 12:51:47.168282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.705 [2024-11-15 12:51:47.172636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.705 [2024-11-15 12:51:47.172756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.706 [2024-11-15 12:51:47.172776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.706 [2024-11-15 12:51:47.177094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.706 [2024-11-15 12:51:47.177193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.706 [2024-11-15 12:51:47.177213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.706 [2024-11-15 12:51:47.181571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.706 [2024-11-15 12:51:47.181728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.706 [2024-11-15 12:51:47.181750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.706 [2024-11-15 12:51:47.186254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.706 [2024-11-15 12:51:47.186488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.706 [2024-11-15 12:51:47.186509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.706 [2024-11-15 12:51:47.191103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.706 [2024-11-15 12:51:47.191201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.706 [2024-11-15 12:51:47.191221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.706 [2024-11-15 12:51:47.195594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.706 [2024-11-15 12:51:47.195757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.706 [2024-11-15 12:51:47.195777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.706 
[2024-11-15 12:51:47.200164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.706 [2024-11-15 12:51:47.200262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.706 [2024-11-15 12:51:47.200281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.706 [2024-11-15 12:51:47.204689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.706 [2024-11-15 12:51:47.204786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.706 [2024-11-15 12:51:47.204806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.706 [2024-11-15 12:51:47.209087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.706 [2024-11-15 12:51:47.209184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.706 [2024-11-15 12:51:47.209204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.706 [2024-11-15 12:51:47.213616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.706 [2024-11-15 12:51:47.213748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.706 [2024-11-15 12:51:47.213770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.706 [2024-11-15 12:51:47.218212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.706 [2024-11-15 12:51:47.218461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.706 [2024-11-15 12:51:47.218482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.706 [2024-11-15 12:51:47.222920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.706 [2024-11-15 12:51:47.223018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.706 [2024-11-15 12:51:47.223038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.706 [2024-11-15 12:51:47.227341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.706 [2024-11-15 12:51:47.227460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.706 [2024-11-15 12:51:47.227480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:16:38.706 [2024-11-15 12:51:47.231895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.706 [2024-11-15 12:51:47.232003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.706 [2024-11-15 12:51:47.232038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.706 [2024-11-15 12:51:47.236447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.706 [2024-11-15 12:51:47.236544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.706 [2024-11-15 12:51:47.236563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.706 [2024-11-15 12:51:47.240985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.706 [2024-11-15 12:51:47.241074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.706 [2024-11-15 12:51:47.241093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.706 [2024-11-15 12:51:47.245437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.706 [2024-11-15 12:51:47.245536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.706 [2024-11-15 12:51:47.245556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.706 [2024-11-15 12:51:47.250078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.706 [2024-11-15 12:51:47.250175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.706 [2024-11-15 12:51:47.250195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.706 [2024-11-15 12:51:47.254656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.706 [2024-11-15 12:51:47.254981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.706 [2024-11-15 12:51:47.255004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.706 [2024-11-15 12:51:47.259487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.706 [2024-11-15 12:51:47.259607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.706 [2024-11-15 12:51:47.259670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.706 [2024-11-15 12:51:47.264097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.706 [2024-11-15 12:51:47.264211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.706 [2024-11-15 12:51:47.264231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.706 [2024-11-15 12:51:47.268635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.706 [2024-11-15 12:51:47.268733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.706 [2024-11-15 12:51:47.268753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.706 [2024-11-15 12:51:47.273020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.706 [2024-11-15 12:51:47.273117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.706 [2024-11-15 12:51:47.273137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.706 [2024-11-15 12:51:47.277432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.707 [2024-11-15 12:51:47.277532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.707 [2024-11-15 12:51:47.277552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.707 [2024-11-15 12:51:47.282086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.707 [2024-11-15 12:51:47.282182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.707 [2024-11-15 12:51:47.282202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.707 [2024-11-15 12:51:47.286665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.707 [2024-11-15 12:51:47.286796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.707 [2024-11-15 12:51:47.286817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.707 [2024-11-15 12:51:47.291068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.707 [2024-11-15 12:51:47.291165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.707 [2024-11-15 12:51:47.291185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.707 [2024-11-15 12:51:47.295552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.707 [2024-11-15 12:51:47.295689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.707 [2024-11-15 12:51:47.295725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.707 [2024-11-15 12:51:47.300169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.707 [2024-11-15 12:51:47.300267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.707 [2024-11-15 12:51:47.300287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.707 [2024-11-15 12:51:47.304659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.707 [2024-11-15 12:51:47.304734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.707 [2024-11-15 12:51:47.304753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.707 [2024-11-15 12:51:47.309043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.707 [2024-11-15 12:51:47.309141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.707 [2024-11-15 12:51:47.309161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.707 [2024-11-15 12:51:47.313488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.707 [2024-11-15 12:51:47.313586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.707 [2024-11-15 12:51:47.313605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.707 [2024-11-15 12:51:47.318168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.707 [2024-11-15 12:51:47.318265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.707 [2024-11-15 12:51:47.318285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.707 [2024-11-15 12:51:47.322697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.707 [2024-11-15 12:51:47.322832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.707 [2024-11-15 12:51:47.322852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.707 [2024-11-15 12:51:47.327106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.707 [2024-11-15 12:51:47.327205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.707 [2024-11-15 12:51:47.327224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.707 [2024-11-15 12:51:47.331518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.707 [2024-11-15 12:51:47.331684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.707 [2024-11-15 12:51:47.331705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.707 [2024-11-15 12:51:47.336109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.707 [2024-11-15 12:51:47.336228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.707 [2024-11-15 12:51:47.336247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.707 [2024-11-15 12:51:47.340549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.707 [2024-11-15 12:51:47.340688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.707 [2024-11-15 12:51:47.340709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.707 [2024-11-15 12:51:47.345135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.707 [2024-11-15 12:51:47.345232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.707 [2024-11-15 12:51:47.345252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.707 [2024-11-15 12:51:47.349554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.707 [2024-11-15 12:51:47.349735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.707 [2024-11-15 12:51:47.349756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.707 [2024-11-15 12:51:47.354135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.707 [2024-11-15 12:51:47.354384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.707 [2024-11-15 12:51:47.354405] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.707 [2024-11-15 12:51:47.358963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.707 [2024-11-15 12:51:47.359062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.707 [2024-11-15 12:51:47.359083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.707 [2024-11-15 12:51:47.363343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.707 [2024-11-15 12:51:47.363451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.707 [2024-11-15 12:51:47.363470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.707 [2024-11-15 12:51:47.368150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.707 [2024-11-15 12:51:47.368275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.707 [2024-11-15 12:51:47.368295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.969 [2024-11-15 12:51:47.373453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.969 [2024-11-15 12:51:47.373547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.969 [2024-11-15 12:51:47.373568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.969 [2024-11-15 12:51:47.378188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.969 [2024-11-15 12:51:47.378466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.969 [2024-11-15 12:51:47.378502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.969 [2024-11-15 12:51:47.383296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.969 [2024-11-15 12:51:47.383418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.969 [2024-11-15 12:51:47.383437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.969 [2024-11-15 12:51:47.387955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.969 [2024-11-15 12:51:47.388049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.969 [2024-11-15 
12:51:47.388068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.969 [2024-11-15 12:51:47.392473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.969 [2024-11-15 12:51:47.392592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.969 [2024-11-15 12:51:47.392639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.969 [2024-11-15 12:51:47.397047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.969 [2024-11-15 12:51:47.397144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.969 [2024-11-15 12:51:47.397163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.969 [2024-11-15 12:51:47.401490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.969 [2024-11-15 12:51:47.401589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.969 [2024-11-15 12:51:47.401609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.969 [2024-11-15 12:51:47.406205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.969 [2024-11-15 12:51:47.406462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.969 [2024-11-15 12:51:47.406484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.969 [2024-11-15 12:51:47.411010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.969 [2024-11-15 12:51:47.411108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.969 [2024-11-15 12:51:47.411127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.969 [2024-11-15 12:51:47.415548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.969 [2024-11-15 12:51:47.415676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.969 [2024-11-15 12:51:47.415697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.969 [2024-11-15 12:51:47.420045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.969 [2024-11-15 12:51:47.420145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:38.969 [2024-11-15 12:51:47.420167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.969 [2024-11-15 12:51:47.424504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.969 [2024-11-15 12:51:47.424601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.969 [2024-11-15 12:51:47.424649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.969 [2024-11-15 12:51:47.429046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.969 [2024-11-15 12:51:47.429165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.969 [2024-11-15 12:51:47.429185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.969 [2024-11-15 12:51:47.433375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.969 [2024-11-15 12:51:47.433470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.969 [2024-11-15 12:51:47.433490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.969 [2024-11-15 12:51:47.438003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.969 [2024-11-15 12:51:47.438133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.969 [2024-11-15 12:51:47.438153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.969 [2024-11-15 12:51:47.442533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.969 [2024-11-15 12:51:47.442820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.969 [2024-11-15 12:51:47.442842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.969 [2024-11-15 12:51:47.447381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.969 [2024-11-15 12:51:47.447637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.969 [2024-11-15 12:51:47.447811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.969 [2024-11-15 12:51:47.452101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.969 [2024-11-15 12:51:47.452349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:38.969 [2024-11-15 12:51:47.452493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.969 [2024-11-15 12:51:47.456767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.969 [2024-11-15 12:51:47.457019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.969 [2024-11-15 12:51:47.457159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.969 [2024-11-15 12:51:47.461539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.969 [2024-11-15 12:51:47.461865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.969 [2024-11-15 12:51:47.462141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.969 [2024-11-15 12:51:47.466464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.969 [2024-11-15 12:51:47.466766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.969 [2024-11-15 12:51:47.466980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.969 [2024-11-15 12:51:47.471233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.969 [2024-11-15 12:51:47.471464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.969 [2024-11-15 12:51:47.471667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.969 [2024-11-15 12:51:47.475867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.969 [2024-11-15 12:51:47.476139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.969 [2024-11-15 12:51:47.476293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.969 [2024-11-15 12:51:47.480518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.969 [2024-11-15 12:51:47.480812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.970 [2024-11-15 12:51:47.481070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.970 [2024-11-15 12:51:47.485112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.970 [2024-11-15 12:51:47.485402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6208 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.970 [2024-11-15 12:51:47.485575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.970 [2024-11-15 12:51:47.489928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.970 [2024-11-15 12:51:47.490186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.970 [2024-11-15 12:51:47.490215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.970 [2024-11-15 12:51:47.494673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.970 [2024-11-15 12:51:47.494942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.970 [2024-11-15 12:51:47.494965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.970 [2024-11-15 12:51:47.499541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.970 [2024-11-15 12:51:47.499653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.970 [2024-11-15 12:51:47.499673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.970 [2024-11-15 12:51:47.503984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.970 [2024-11-15 12:51:47.504082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.970 [2024-11-15 12:51:47.504102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.970 [2024-11-15 12:51:47.508427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.970 [2024-11-15 12:51:47.508526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.970 [2024-11-15 12:51:47.508545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.970 [2024-11-15 12:51:47.513055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.970 [2024-11-15 12:51:47.513153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.970 [2024-11-15 12:51:47.513172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.970 [2024-11-15 12:51:47.517560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.970 [2024-11-15 12:51:47.517731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.970 [2024-11-15 12:51:47.517752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.970 [2024-11-15 12:51:47.522012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.970 [2024-11-15 12:51:47.522133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.970 [2024-11-15 12:51:47.522153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.970 [2024-11-15 12:51:47.526527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.970 [2024-11-15 12:51:47.526792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.970 [2024-11-15 12:51:47.526813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.970 [2024-11-15 12:51:47.531319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.970 [2024-11-15 12:51:47.531439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.970 [2024-11-15 12:51:47.531459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.970 [2024-11-15 12:51:47.535916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.970 [2024-11-15 12:51:47.536013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.970 [2024-11-15 12:51:47.536032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.970 [2024-11-15 12:51:47.540327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.970 [2024-11-15 12:51:47.540423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.970 [2024-11-15 12:51:47.540443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.970 [2024-11-15 12:51:47.544830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.970 [2024-11-15 12:51:47.544930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.970 [2024-11-15 12:51:47.544950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.970 [2024-11-15 12:51:47.549188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.970 [2024-11-15 12:51:47.549286] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.970 [2024-11-15 12:51:47.549306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.970 [2024-11-15 12:51:47.553662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.970 [2024-11-15 12:51:47.553811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.970 [2024-11-15 12:51:47.553832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.970 [2024-11-15 12:51:47.558324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.970 [2024-11-15 12:51:47.558555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.970 [2024-11-15 12:51:47.558576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.970 [2024-11-15 12:51:47.563099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.970 [2024-11-15 12:51:47.563226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.970 [2024-11-15 12:51:47.563246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.970 [2024-11-15 12:51:47.567759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.970 [2024-11-15 12:51:47.567850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.970 [2024-11-15 12:51:47.567870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.970 [2024-11-15 12:51:47.572218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.970 [2024-11-15 12:51:47.572316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.970 [2024-11-15 12:51:47.572336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.970 [2024-11-15 12:51:47.576745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.970 [2024-11-15 12:51:47.576835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.970 [2024-11-15 12:51:47.576854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.970 [2024-11-15 12:51:47.581161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.970 [2024-11-15 12:51:47.581260] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.970 [2024-11-15 12:51:47.581279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.970 [2024-11-15 12:51:47.585730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.970 [2024-11-15 12:51:47.585835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.970 [2024-11-15 12:51:47.585857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.970 [2024-11-15 12:51:47.590286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.970 [2024-11-15 12:51:47.590550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.970 [2024-11-15 12:51:47.590572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.970 [2024-11-15 12:51:47.595185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.970 [2024-11-15 12:51:47.595306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.970 [2024-11-15 12:51:47.595326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.970 [2024-11-15 12:51:47.599696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.970 [2024-11-15 12:51:47.599800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.970 [2024-11-15 12:51:47.599819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.970 [2024-11-15 12:51:47.604170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.970 [2024-11-15 12:51:47.604267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.970 [2024-11-15 12:51:47.604286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.971 [2024-11-15 12:51:47.608708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.971 [2024-11-15 12:51:47.608806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.971 [2024-11-15 12:51:47.608825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.971 [2024-11-15 12:51:47.613097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.971 [2024-11-15 
12:51:47.613196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.971 [2024-11-15 12:51:47.613216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:38.971 [2024-11-15 12:51:47.617574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.971 [2024-11-15 12:51:47.617725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.971 [2024-11-15 12:51:47.617747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:38.971 [2024-11-15 12:51:47.622153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.971 [2024-11-15 12:51:47.622403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.971 [2024-11-15 12:51:47.622425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:38.971 [2024-11-15 12:51:47.627058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.971 [2024-11-15 12:51:47.627155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.971 [2024-11-15 12:51:47.627175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:38.971 [2024-11-15 12:51:47.631676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:38.971 [2024-11-15 12:51:47.631814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.971 [2024-11-15 12:51:47.631851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:39.232 [2024-11-15 12:51:47.636801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.232 [2024-11-15 12:51:47.636899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.232 [2024-11-15 12:51:47.636919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:39.232 [2024-11-15 12:51:47.641652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.232 [2024-11-15 12:51:47.641874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.232 [2024-11-15 12:51:47.641896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:39.232 [2024-11-15 12:51:47.646578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 
00:16:39.232 [2024-11-15 12:51:47.646842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.232 [2024-11-15 12:51:47.646864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:39.232 [2024-11-15 12:51:47.651412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.232 [2024-11-15 12:51:47.651717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.232 [2024-11-15 12:51:47.652104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:39.232 [2024-11-15 12:51:47.656209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.232 [2024-11-15 12:51:47.656459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.232 [2024-11-15 12:51:47.656616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:39.232 [2024-11-15 12:51:47.661005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.232 [2024-11-15 12:51:47.661241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.232 [2024-11-15 12:51:47.661406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:39.232 [2024-11-15 12:51:47.665794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.232 [2024-11-15 12:51:47.666055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.232 [2024-11-15 12:51:47.666216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:39.232 [2024-11-15 12:51:47.670559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.232 [2024-11-15 12:51:47.670828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.232 [2024-11-15 12:51:47.671059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:39.233 [2024-11-15 12:51:47.675314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.233 [2024-11-15 12:51:47.675568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.233 [2024-11-15 12:51:47.675765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:39.233 [2024-11-15 12:51:47.680065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with 
pdu=0x2000166ff3c8 00:16:39.233 [2024-11-15 12:51:47.680314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.233 [2024-11-15 12:51:47.680476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:39.233 [2024-11-15 12:51:47.684842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.233 [2024-11-15 12:51:47.685100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.233 [2024-11-15 12:51:47.685254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:39.233 [2024-11-15 12:51:47.689586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.233 [2024-11-15 12:51:47.689889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.233 [2024-11-15 12:51:47.689914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:39.233 [2024-11-15 12:51:47.694349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.233 [2024-11-15 12:51:47.694601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.233 [2024-11-15 12:51:47.694624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:39.233 [2024-11-15 12:51:47.699520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.233 [2024-11-15 12:51:47.699826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.233 [2024-11-15 12:51:47.700189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:39.233 [2024-11-15 12:51:47.704438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.233 [2024-11-15 12:51:47.704733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.233 [2024-11-15 12:51:47.705018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:39.233 [2024-11-15 12:51:47.709130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.233 [2024-11-15 12:51:47.709380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.233 [2024-11-15 12:51:47.709520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:39.233 [2024-11-15 12:51:47.714071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.233 [2024-11-15 12:51:47.714315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.233 [2024-11-15 12:51:47.714459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:39.233 [2024-11-15 12:51:47.718855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.233 [2024-11-15 12:51:47.719093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.233 [2024-11-15 12:51:47.719237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:39.233 [2024-11-15 12:51:47.723574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.233 [2024-11-15 12:51:47.723823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.233 [2024-11-15 12:51:47.723982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:39.233 [2024-11-15 12:51:47.728247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.233 [2024-11-15 12:51:47.728498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.233 [2024-11-15 12:51:47.728746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:39.233 [2024-11-15 12:51:47.732922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.233 [2024-11-15 12:51:47.733196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.233 [2024-11-15 12:51:47.733369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:39.233 [2024-11-15 12:51:47.737751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.233 [2024-11-15 12:51:47.738043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.233 [2024-11-15 12:51:47.738355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:39.233 [2024-11-15 12:51:47.742501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.233 [2024-11-15 12:51:47.742751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.233 [2024-11-15 12:51:47.742908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:39.233 [2024-11-15 12:51:47.747207] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.233 [2024-11-15 12:51:47.747478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.233 [2024-11-15 12:51:47.747679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:39.233 [2024-11-15 12:51:47.752061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.233 [2024-11-15 12:51:47.752300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.233 [2024-11-15 12:51:47.752454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:39.233 [2024-11-15 12:51:47.756736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.233 [2024-11-15 12:51:47.756986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.233 [2024-11-15 12:51:47.757140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:39.233 [2024-11-15 12:51:47.761272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.233 [2024-11-15 12:51:47.761545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.233 [2024-11-15 12:51:47.761754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:39.233 [2024-11-15 12:51:47.766173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.233 [2024-11-15 12:51:47.766451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.233 [2024-11-15 12:51:47.766616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:39.233 [2024-11-15 12:51:47.770800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.233 [2024-11-15 12:51:47.771040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.233 [2024-11-15 12:51:47.771200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:39.233 [2024-11-15 12:51:47.775502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.233 [2024-11-15 12:51:47.775785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.233 [2024-11-15 12:51:47.775929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:39.233 [2024-11-15 12:51:47.780261] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.233 [2024-11-15 12:51:47.780505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.233 [2024-11-15 12:51:47.780814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:39.233 [2024-11-15 12:51:47.785012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.233 [2024-11-15 12:51:47.785260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.233 [2024-11-15 12:51:47.785404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:39.233 [2024-11-15 12:51:47.789789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.233 [2024-11-15 12:51:47.790032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.233 [2024-11-15 12:51:47.790316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:39.234 [2024-11-15 12:51:47.794605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.234 [2024-11-15 12:51:47.794909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.234 [2024-11-15 12:51:47.795184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:39.234 [2024-11-15 12:51:47.799359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.234 [2024-11-15 12:51:47.799637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.234 [2024-11-15 12:51:47.799835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:39.234 [2024-11-15 12:51:47.804030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.234 [2024-11-15 12:51:47.804281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.234 [2024-11-15 12:51:47.804491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:39.234 [2024-11-15 12:51:47.808789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.234 [2024-11-15 12:51:47.809015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.234 [2024-11-15 12:51:47.809314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:39.234 [2024-11-15 
12:51:47.813586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.234 [2024-11-15 12:51:47.813888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.234 [2024-11-15 12:51:47.814133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:39.234 [2024-11-15 12:51:47.818430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.234 [2024-11-15 12:51:47.818666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.234 [2024-11-15 12:51:47.818689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:39.234 [2024-11-15 12:51:47.823071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.234 [2024-11-15 12:51:47.823170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.234 [2024-11-15 12:51:47.823190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:39.234 [2024-11-15 12:51:47.827546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.234 [2024-11-15 12:51:47.827691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.234 [2024-11-15 12:51:47.827712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:39.234 [2024-11-15 12:51:47.832097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.234 [2024-11-15 12:51:47.832216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.234 [2024-11-15 12:51:47.832236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:39.234 [2024-11-15 12:51:47.836608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.234 [2024-11-15 12:51:47.836736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.234 [2024-11-15 12:51:47.836756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:39.234 [2024-11-15 12:51:47.841097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.234 [2024-11-15 12:51:47.841221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.234 [2024-11-15 12:51:47.841241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
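Every failure in this stretch follows the same three-record pattern: tcp.c:2233 (data_crc32_calc_done) reports a data digest mismatch on the TCP qpair, the affected WRITE command is printed, and it completes with status (00/22), COMMAND TRANSIENT TRANSPORT ERROR, the outcome the nvmf_digest_error case expects and counts further down (see the get_transient_errcount check below). A quick, purely illustrative way to tally these completions from a saved copy of this console output (the file name here is hypothetical):

    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' nvmf_digest_error.log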
00:16:39.234 [2024-11-15 12:51:47.845509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.234 [2024-11-15 12:51:47.845599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.234 [2024-11-15 12:51:47.845648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:39.234 [2024-11-15 12:51:47.850216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.234 [2024-11-15 12:51:47.850463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.234 [2024-11-15 12:51:47.850484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:39.234 [2024-11-15 12:51:47.855004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.234 [2024-11-15 12:51:47.855105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.234 [2024-11-15 12:51:47.855125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:39.234 [2024-11-15 12:51:47.859459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.234 [2024-11-15 12:51:47.859584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.234 [2024-11-15 12:51:47.859604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:39.234 [2024-11-15 12:51:47.864071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.234 [2024-11-15 12:51:47.864170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.234 [2024-11-15 12:51:47.864190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:39.234 [2024-11-15 12:51:47.868603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.234 [2024-11-15 12:51:47.868712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.234 [2024-11-15 12:51:47.868732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:39.234 [2024-11-15 12:51:47.873177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.234 [2024-11-15 12:51:47.873296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.234 [2024-11-15 12:51:47.873316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:16:39.234 [2024-11-15 12:51:47.877651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.234 [2024-11-15 12:51:47.877786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.234 [2024-11-15 12:51:47.877806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:39.234 [2024-11-15 12:51:47.882270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.234 [2024-11-15 12:51:47.882390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.234 [2024-11-15 12:51:47.882410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:39.234 [2024-11-15 12:51:47.886832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.234 [2024-11-15 12:51:47.886929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.234 [2024-11-15 12:51:47.886949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:39.234 [2024-11-15 12:51:47.891280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.234 [2024-11-15 12:51:47.891378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.234 [2024-11-15 12:51:47.891398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:39.234 [2024-11-15 12:51:47.896201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.234 [2024-11-15 12:51:47.896299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.234 [2024-11-15 12:51:47.896319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:39.494 [2024-11-15 12:51:47.901316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.494 [2024-11-15 12:51:47.901547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.494 [2024-11-15 12:51:47.901568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:39.494 [2024-11-15 12:51:47.906446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.494 [2024-11-15 12:51:47.906548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.494 [2024-11-15 12:51:47.906584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:39.494 [2024-11-15 12:51:47.911203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.494 [2024-11-15 12:51:47.911305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.494 [2024-11-15 12:51:47.911325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:39.494 [2024-11-15 12:51:47.915892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.494 [2024-11-15 12:51:47.916008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.494 [2024-11-15 12:51:47.916028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:39.494 [2024-11-15 12:51:47.920607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.494 [2024-11-15 12:51:47.920721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.494 [2024-11-15 12:51:47.920741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:39.494 [2024-11-15 12:51:47.925207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.494 [2024-11-15 12:51:47.925442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.494 [2024-11-15 12:51:47.925463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:39.494 [2024-11-15 12:51:47.930240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.494 [2024-11-15 12:51:47.930341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.494 [2024-11-15 12:51:47.930360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:39.494 [2024-11-15 12:51:47.934950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.494 [2024-11-15 12:51:47.935066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.494 [2024-11-15 12:51:47.935085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:39.494 [2024-11-15 12:51:47.939567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.494 [2024-11-15 12:51:47.939697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.494 [2024-11-15 12:51:47.939718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:39.494 [2024-11-15 12:51:47.944290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.494 [2024-11-15 12:51:47.944390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.494 [2024-11-15 12:51:47.944410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:39.494 [2024-11-15 12:51:47.948952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.494 [2024-11-15 12:51:47.949052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.494 [2024-11-15 12:51:47.949072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:39.494 [2024-11-15 12:51:47.953527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.494 [2024-11-15 12:51:47.953841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.494 [2024-11-15 12:51:47.953863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:39.494 [2024-11-15 12:51:47.958503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.494 [2024-11-15 12:51:47.958602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.495 [2024-11-15 12:51:47.958623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:39.495 [2024-11-15 12:51:47.963022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.495 [2024-11-15 12:51:47.963119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.495 [2024-11-15 12:51:47.963139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:39.495 [2024-11-15 12:51:47.967543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.495 [2024-11-15 12:51:47.967664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.495 [2024-11-15 12:51:47.967686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:39.495 [2024-11-15 12:51:47.972060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.495 [2024-11-15 12:51:47.972181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.495 [2024-11-15 12:51:47.972201] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:39.495 [2024-11-15 12:51:47.976546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.495 [2024-11-15 12:51:47.976687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.495 [2024-11-15 12:51:47.976708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:39.495 [2024-11-15 12:51:47.981151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.495 [2024-11-15 12:51:47.981382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.495 [2024-11-15 12:51:47.981402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:39.495 [2024-11-15 12:51:47.986018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.495 [2024-11-15 12:51:47.986133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.495 [2024-11-15 12:51:47.986153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:39.495 6618.50 IOPS, 827.31 MiB/s [2024-11-15T12:51:48.165Z] [2024-11-15 12:51:47.991935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4aa90) with pdu=0x2000166ff3c8 00:16:39.495 [2024-11-15 12:51:47.992080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.495 [2024-11-15 12:51:47.992100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:39.495 00:16:39.495 Latency(us) 00:16:39.495 [2024-11-15T12:51:48.165Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.495 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:39.495 nvme0n1 : 2.00 6615.09 826.89 0.00 0.00 2413.27 1779.90 9711.24 00:16:39.495 [2024-11-15T12:51:48.165Z] =================================================================================================================== 00:16:39.495 [2024-11-15T12:51:48.165Z] Total : 6615.09 826.89 0.00 0.00 2413.27 1779.90 9711.24 00:16:39.495 { 00:16:39.495 "results": [ 00:16:39.495 { 00:16:39.495 "job": "nvme0n1", 00:16:39.495 "core_mask": "0x2", 00:16:39.495 "workload": "randwrite", 00:16:39.495 "status": "finished", 00:16:39.495 "queue_depth": 16, 00:16:39.495 "io_size": 131072, 00:16:39.495 "runtime": 2.003602, 00:16:39.495 "iops": 6615.08622970031, 00:16:39.495 "mibps": 826.8857787125387, 00:16:39.495 "io_failed": 0, 00:16:39.495 "io_timeout": 0, 00:16:39.495 "avg_latency_us": 2413.2664202916444, 00:16:39.495 "min_latency_us": 1779.898181818182, 00:16:39.495 "max_latency_us": 9711.243636363637 00:16:39.495 } 00:16:39.495 ], 00:16:39.495 "core_count": 1 00:16:39.495 } 00:16:39.495 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 
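The get_transient_errcount call traced next shows how the test turns those completions into a pass/fail condition: it queries the benchmark app over its /var/tmp/bperf.sock RPC socket with bdev_get_iostat, extracts the driver-specific command_transient_transport_error counter with jq, and simply requires the count to be greater than zero (428 in this run, per the (( 428 > 0 )) check below). A standalone sketch of that check, reusing the socket path, bdev name, and jq filter from the trace:

    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 )) && echo "observed $errcount transient transport errors"

In the trace that follows, bash's xtrace prints the jq filter spread over several log records because the script writes it as a multi-line string, but it is a single expression.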
00:16:39.495 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:39.495 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:39.495 | .driver_specific 00:16:39.495 | .nvme_error 00:16:39.495 | .status_code 00:16:39.495 | .command_transient_transport_error' 00:16:39.495 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:39.754 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 428 > 0 )) 00:16:39.754 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79748 00:16:39.754 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79748 ']' 00:16:39.754 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79748 00:16:39.754 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:16:39.754 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:39.754 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79748 00:16:39.754 killing process with pid 79748 00:16:39.754 Received shutdown signal, test time was about 2.000000 seconds 00:16:39.754 00:16:39.754 Latency(us) 00:16:39.754 [2024-11-15T12:51:48.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.754 [2024-11-15T12:51:48.424Z] =================================================================================================================== 00:16:39.754 [2024-11-15T12:51:48.424Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:39.754 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:39.754 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:39.754 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79748' 00:16:39.754 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79748 00:16:39.754 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79748 00:16:40.013 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 79576 00:16:40.013 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79576 ']' 00:16:40.013 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79576 00:16:40.013 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:16:40.013 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:40.013 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79576 00:16:40.013 killing process with pid 79576 00:16:40.013 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:40.013 12:51:48 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:40.013 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79576' 00:16:40.013 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79576 00:16:40.013 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79576 00:16:40.013 00:16:40.013 real 0m15.206s 00:16:40.013 user 0m29.844s 00:16:40.013 sys 0m4.284s 00:16:40.013 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:40.013 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:40.013 ************************************ 00:16:40.013 END TEST nvmf_digest_error 00:16:40.013 ************************************ 00:16:40.013 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:16:40.013 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:16:40.013 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:40.013 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:40.272 rmmod nvme_tcp 00:16:40.272 rmmod nvme_fabrics 00:16:40.272 rmmod nvme_keyring 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:16:40.272 Process with pid 79576 is not found 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 79576 ']' 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 79576 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 79576 ']' 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 79576 00:16:40.272 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79576) - No such process 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 79576 is not found' 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:40.272 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:40.531 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:40.531 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.531 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:40.531 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.531 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:16:40.531 00:16:40.531 real 0m31.501s 00:16:40.531 user 1m0.023s 00:16:40.531 sys 0m8.978s 00:16:40.531 ************************************ 00:16:40.531 END TEST nvmf_digest 00:16:40.531 ************************************ 00:16:40.531 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:40.531 12:51:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:16:40.531 12:51:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:16:40.531 12:51:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:16:40.531 12:51:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:16:40.531 12:51:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:40.531 12:51:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:40.531 12:51:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.531 ************************************ 00:16:40.531 START TEST nvmf_host_multipath 00:16:40.531 ************************************ 00:16:40.531 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:16:40.531 * Looking for test storage... 00:16:40.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:40.531 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:40.531 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:16:40.531 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:40.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.792 --rc genhtml_branch_coverage=1 00:16:40.792 --rc genhtml_function_coverage=1 00:16:40.792 --rc genhtml_legend=1 00:16:40.792 --rc geninfo_all_blocks=1 00:16:40.792 --rc geninfo_unexecuted_blocks=1 00:16:40.792 00:16:40.792 ' 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:40.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.792 --rc genhtml_branch_coverage=1 00:16:40.792 --rc genhtml_function_coverage=1 00:16:40.792 --rc genhtml_legend=1 00:16:40.792 --rc geninfo_all_blocks=1 00:16:40.792 --rc geninfo_unexecuted_blocks=1 00:16:40.792 00:16:40.792 ' 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:40.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.792 --rc genhtml_branch_coverage=1 00:16:40.792 --rc genhtml_function_coverage=1 00:16:40.792 --rc genhtml_legend=1 00:16:40.792 --rc geninfo_all_blocks=1 00:16:40.792 --rc geninfo_unexecuted_blocks=1 00:16:40.792 00:16:40.792 ' 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:40.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.792 --rc genhtml_branch_coverage=1 00:16:40.792 --rc genhtml_function_coverage=1 00:16:40.792 --rc genhtml_legend=1 00:16:40.792 --rc geninfo_all_blocks=1 00:16:40.792 --rc geninfo_unexecuted_blocks=1 00:16:40.792 00:16:40.792 ' 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:40.792 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:40.793 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:40.793 Cannot find device "nvmf_init_br" 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:40.793 Cannot find device "nvmf_init_br2" 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:40.793 Cannot find device "nvmf_tgt_br" 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:40.793 Cannot find device "nvmf_tgt_br2" 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:40.793 Cannot find device "nvmf_init_br" 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:40.793 Cannot find device "nvmf_init_br2" 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:40.793 Cannot find device "nvmf_tgt_br" 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:40.793 Cannot find device "nvmf_tgt_br2" 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:40.793 Cannot find device "nvmf_br" 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:16:40.793 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:40.793 Cannot find device "nvmf_init_if" 00:16:40.794 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:16:40.794 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:40.794 Cannot find device "nvmf_init_if2" 00:16:40.794 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:16:40.794 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:16:40.794 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:40.794 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:16:40.794 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:40.794 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:40.794 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:16:40.794 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:40.794 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:40.794 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:40.794 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:40.794 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:40.794 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:40.794 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:41.053 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:41.053 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:41.053 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:41.053 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:41.053 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:41.053 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:41.053 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:41.053 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:41.053 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:41.053 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:41.053 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:41.053 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:41.053 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:41.053 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:41.053 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:41.053 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
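The "Cannot find device" / "Cannot open network namespace" messages above are the tolerated teardown of a topology that does not exist yet (each failed probe is followed by true); nvmf_veth_init then rebuilds it from scratch, and the records that follow finish enslaving the bridge ports, install the iptables ACCEPT rules for port 4420, and ping every address. Condensed to a single initiator/target pair (the traced script also creates the *_if2/*_br2 twins and adds iptables comments), the topology amounts roughly to:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # host side can now reach the target namespace over the bridge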
00:16:41.053 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:41.053 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:41.053 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:41.053 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:41.053 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:41.053 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:41.053 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:41.053 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:41.053 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:41.053 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:41.053 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:41.053 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:16:41.053 00:16:41.053 --- 10.0.0.3 ping statistics --- 00:16:41.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.053 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:16:41.053 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:41.053 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:41.053 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:16:41.053 00:16:41.053 --- 10.0.0.4 ping statistics --- 00:16:41.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.053 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:41.054 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:41.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:41.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:41.054 00:16:41.054 --- 10.0.0.1 ping statistics --- 00:16:41.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.054 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:41.054 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:41.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:41.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:16:41.054 00:16:41.054 --- 10.0.0.2 ping statistics --- 00:16:41.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.054 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:16:41.054 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:41.054 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:16:41.054 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:41.054 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:41.054 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:41.054 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:41.054 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:41.054 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:41.054 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:41.054 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:16:41.054 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:41.054 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:41.054 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:41.054 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=80067 00:16:41.054 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:41.054 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 80067 00:16:41.054 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80067 ']' 00:16:41.054 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.054 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:41.054 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.054 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:41.054 12:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:41.313 [2024-11-15 12:51:49.733541] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
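nvmfappstart above launches the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x3), records its pid (80067), and waitforlisten then blocks until the application is up and answering on /var/tmp/spdk.sock. A rough sketch of that start-and-wait pattern, assuming the SPDK rpc.py helper and a hypothetical 100-attempt retry loop (mirroring max_retries=100 in the log), not the exact autotest helper, is:

  # start the target in the namespace and remember its pid (sketch only)
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # poll the default RPC socket until the app responds; rpc_get_methods is a cheap query
  for _ in $(seq 1 100); do
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
  done
  echo "nvmf_tgt ($nvmfpid) is listening on /var/tmp/spdk.sock"

Once the socket answers, the rest of the test drives the target purely over rpc.py, as seen in the lines that follow: nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, and nvmf_subsystem_add_listener on 10.0.0.3 ports 4420 and 4421.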
00:16:41.313 [2024-11-15 12:51:49.734397] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.313 [2024-11-15 12:51:49.882408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:41.313 [2024-11-15 12:51:49.910307] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:41.313 [2024-11-15 12:51:49.910355] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:41.313 [2024-11-15 12:51:49.910364] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:41.313 [2024-11-15 12:51:49.910370] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:41.313 [2024-11-15 12:51:49.910376] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:41.313 [2024-11-15 12:51:49.914640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.313 [2024-11-15 12:51:49.914664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.313 [2024-11-15 12:51:49.943086] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:41.572 12:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:41.572 12:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:16:41.572 12:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:41.572 12:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:41.572 12:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:41.572 12:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.572 12:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80067 00:16:41.572 12:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:41.831 [2024-11-15 12:51:50.333348] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.831 12:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:42.090 Malloc0 00:16:42.090 12:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:42.349 12:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:42.608 12:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:42.867 [2024-11-15 12:51:51.438521] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:42.867 12:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:43.126 [2024-11-15 12:51:51.710654] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:43.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:43.126 12:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80114 00:16:43.126 12:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:43.126 12:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:43.126 12:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80114 /var/tmp/bdevperf.sock 00:16:43.126 12:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80114 ']' 00:16:43.126 12:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:43.126 12:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:43.126 12:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:43.126 12:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.126 12:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:44.061 12:51:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:44.061 12:51:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:16:44.061 12:51:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:44.627 12:51:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:44.886 Nvme0n1 00:16:44.886 12:51:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:45.144 Nvme0n1 00:16:45.144 12:51:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:16:45.144 12:51:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:46.128 12:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:16:46.128 12:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:46.426 12:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:46.686 12:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:16:46.686 12:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80067 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:46.686 12:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80160 00:16:46.686 12:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:16:53.251 12:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:53.251 12:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:16:53.251 12:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:16:53.251 12:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:53.251 Attaching 4 probes... 00:16:53.251 @path[10.0.0.3, 4421]: 20006 00:16:53.251 @path[10.0.0.3, 4421]: 20493 00:16:53.251 @path[10.0.0.3, 4421]: 20579 00:16:53.251 @path[10.0.0.3, 4421]: 20453 00:16:53.251 @path[10.0.0.3, 4421]: 20428 00:16:53.251 12:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:53.251 12:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:16:53.251 12:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:16:53.251 12:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:16:53.251 12:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:16:53.251 12:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:16:53.251 12:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80160 00:16:53.251 12:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:53.251 12:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:16:53.251 12:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:53.251 12:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:53.510 12:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:16:53.510 12:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80275 00:16:53.510 12:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80067 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:53.510 12:52:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:00.076 12:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:00.076 12:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:00.076 12:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:17:00.076 12:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:00.076 Attaching 4 probes... 00:17:00.076 @path[10.0.0.3, 4420]: 20396 00:17:00.076 @path[10.0.0.3, 4420]: 20584 00:17:00.076 @path[10.0.0.3, 4420]: 20464 00:17:00.076 @path[10.0.0.3, 4420]: 20601 00:17:00.076 @path[10.0.0.3, 4420]: 20693 00:17:00.076 12:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:00.076 12:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:00.076 12:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:00.076 12:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:17:00.076 12:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:00.076 12:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:00.076 12:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80275 00:17:00.076 12:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:00.076 12:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:17:00.076 12:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:00.076 12:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:00.076 12:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:17:00.076 12:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80067 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:00.076 12:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80393 00:17:00.076 12:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:06.642 12:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:06.642 12:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:06.642 12:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:06.642 12:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:06.642 Attaching 4 probes... 00:17:06.642 @path[10.0.0.3, 4421]: 14546 00:17:06.642 @path[10.0.0.3, 4421]: 20258 00:17:06.642 @path[10.0.0.3, 4421]: 20099 00:17:06.642 @path[10.0.0.3, 4421]: 20092 00:17:06.642 @path[10.0.0.3, 4421]: 20079 00:17:06.642 12:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:06.642 12:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:06.642 12:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:06.642 12:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:06.642 12:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:06.642 12:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:06.642 12:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80393 00:17:06.642 12:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:06.642 12:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:17:06.642 12:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:06.901 12:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:07.160 12:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:17:07.160 12:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80505 00:17:07.160 12:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80067 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:07.160 12:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:13.725 12:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:17:13.725 12:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:13.725 12:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:17:13.725 12:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:13.725 Attaching 4 probes... 
00:17:13.725 00:17:13.725 00:17:13.725 00:17:13.725 00:17:13.725 00:17:13.725 12:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:13.725 12:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:13.725 12:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:13.725 12:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:17:13.725 12:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:17:13.725 12:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:17:13.725 12:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80505 00:17:13.725 12:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:13.725 12:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:17:13.725 12:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:13.725 12:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:13.984 12:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:17:13.984 12:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80623 00:17:13.984 12:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80067 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:13.984 12:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:20.549 12:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:20.549 12:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:20.549 12:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:20.549 12:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:20.549 Attaching 4 probes... 
00:17:20.549 @path[10.0.0.3, 4421]: 19539 00:17:20.549 @path[10.0.0.3, 4421]: 20034 00:17:20.549 @path[10.0.0.3, 4421]: 20048 00:17:20.549 @path[10.0.0.3, 4421]: 19912 00:17:20.549 @path[10.0.0.3, 4421]: 20000 00:17:20.549 12:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:20.549 12:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:20.549 12:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:20.549 12:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:20.549 12:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:20.549 12:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:20.549 12:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80623 00:17:20.549 12:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:20.549 12:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:20.549 12:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:17:21.485 12:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:17:21.485 12:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80747 00:17:21.485 12:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80067 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:21.485 12:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:28.062 12:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:28.062 12:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:28.062 12:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:17:28.062 12:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:28.062 Attaching 4 probes... 
00:17:28.062 @path[10.0.0.3, 4420]: 19547 00:17:28.062 @path[10.0.0.3, 4420]: 19856 00:17:28.062 @path[10.0.0.3, 4420]: 19808 00:17:28.062 @path[10.0.0.3, 4420]: 19871 00:17:28.062 @path[10.0.0.3, 4420]: 19817 00:17:28.062 12:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:28.062 12:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:28.062 12:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:28.062 12:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:17:28.062 12:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:28.062 12:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:28.062 12:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80747 00:17:28.062 12:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:28.062 12:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:28.062 [2024-11-15 12:52:36.592130] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:28.062 12:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:28.321 12:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:17:34.889 12:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:17:34.889 12:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80921 00:17:34.889 12:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80067 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:34.889 12:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:41.465 12:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:41.465 12:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:41.465 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:41.465 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:41.465 Attaching 4 probes... 
00:17:41.465 @path[10.0.0.3, 4421]: 19358 00:17:41.465 @path[10.0.0.3, 4421]: 19818 00:17:41.465 @path[10.0.0.3, 4421]: 19730 00:17:41.465 @path[10.0.0.3, 4421]: 19893 00:17:41.465 @path[10.0.0.3, 4421]: 19865 00:17:41.465 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:41.465 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:41.465 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:41.465 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:41.465 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:41.465 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:41.465 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80921 00:17:41.465 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:41.465 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80114 00:17:41.465 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80114 ']' 00:17:41.465 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80114 00:17:41.465 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:17:41.465 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:41.466 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80114 00:17:41.466 killing process with pid 80114 00:17:41.466 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:41.466 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:41.466 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80114' 00:17:41.466 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80114 00:17:41.466 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80114 00:17:41.466 { 00:17:41.466 "results": [ 00:17:41.466 { 00:17:41.466 "job": "Nvme0n1", 00:17:41.466 "core_mask": "0x4", 00:17:41.466 "workload": "verify", 00:17:41.466 "status": "terminated", 00:17:41.466 "verify_range": { 00:17:41.466 "start": 0, 00:17:41.466 "length": 16384 00:17:41.466 }, 00:17:41.466 "queue_depth": 128, 00:17:41.466 "io_size": 4096, 00:17:41.466 "runtime": 55.40464, 00:17:41.466 "iops": 8500.732068649846, 00:17:41.466 "mibps": 33.20598464316346, 00:17:41.466 "io_failed": 0, 00:17:41.466 "io_timeout": 0, 00:17:41.466 "avg_latency_us": 15028.370967336963, 00:17:41.466 "min_latency_us": 983.04, 00:17:41.466 "max_latency_us": 7015926.69090909 00:17:41.466 } 00:17:41.466 ], 00:17:41.466 "core_count": 1 00:17:41.466 } 00:17:41.466 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80114 00:17:41.466 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:41.466 [2024-11-15 12:51:51.774352] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 
initialization... 00:17:41.466 [2024-11-15 12:51:51.774444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80114 ] 00:17:41.466 [2024-11-15 12:51:51.922507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.466 [2024-11-15 12:51:51.961929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.466 [2024-11-15 12:51:51.996011] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:41.466 Running I/O for 90 seconds... 00:17:41.466 7956.00 IOPS, 31.08 MiB/s [2024-11-15T12:52:50.136Z] 8948.00 IOPS, 34.95 MiB/s [2024-11-15T12:52:50.136Z] 9376.00 IOPS, 36.62 MiB/s [2024-11-15T12:52:50.136Z] 9594.00 IOPS, 37.48 MiB/s [2024-11-15T12:52:50.136Z] 9729.00 IOPS, 38.00 MiB/s [2024-11-15T12:52:50.136Z] 9816.83 IOPS, 38.35 MiB/s [2024-11-15T12:52:50.136Z] 9872.00 IOPS, 38.56 MiB/s [2024-11-15T12:52:50.136Z] 9886.00 IOPS, 38.62 MiB/s [2024-11-15T12:52:50.136Z] [2024-11-15 12:52:01.905174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:118848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.466 [2024-11-15 12:52:01.905226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:41.466 [2024-11-15 12:52:01.905291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:118856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.466 [2024-11-15 12:52:01.905311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:41.466 [2024-11-15 12:52:01.905331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:118864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.466 [2024-11-15 12:52:01.905344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:41.466 [2024-11-15 12:52:01.905364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:118872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.466 [2024-11-15 12:52:01.905376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.466 [2024-11-15 12:52:01.905395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:118880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.466 [2024-11-15 12:52:01.905408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:41.466 [2024-11-15 12:52:01.905426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:118888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.466 [2024-11-15 12:52:01.905439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:41.466 [2024-11-15 12:52:01.905457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:118896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.466 [2024-11-15 12:52:01.905470] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:41.466 [2024-11-15 12:52:01.905489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:118904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.466 [2024-11-15 12:52:01.905501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:41.466 [2024-11-15 12:52:01.905520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:118912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.466 [2024-11-15 12:52:01.905532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:41.466 [2024-11-15 12:52:01.905551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:118920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.466 [2024-11-15 12:52:01.905613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:41.466 [2024-11-15 12:52:01.905638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:118928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.466 [2024-11-15 12:52:01.905653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:41.466 [2024-11-15 12:52:01.905696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:118936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.466 [2024-11-15 12:52:01.905728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:41.466 [2024-11-15 12:52:01.905748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:118944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.466 [2024-11-15 12:52:01.905761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:41.466 [2024-11-15 12:52:01.905781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:118464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.466 [2024-11-15 12:52:01.905794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:41.466 [2024-11-15 12:52:01.905830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:118472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.466 [2024-11-15 12:52:01.905844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:41.466 [2024-11-15 12:52:01.905864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:118480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.466 [2024-11-15 12:52:01.905878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:41.466 [2024-11-15 12:52:01.905898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:118488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:41.466 [2024-11-15 12:52:01.905914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:41.466 [2024-11-15 12:52:01.905935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:118496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.466 [2024-11-15 12:52:01.905949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:41.466 [2024-11-15 12:52:01.905969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:118504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.466 [2024-11-15 12:52:01.905983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:41.466 [2024-11-15 12:52:01.906018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:118512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.466 [2024-11-15 12:52:01.906032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:41.466 [2024-11-15 12:52:01.906051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:118520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.466 [2024-11-15 12:52:01.906078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:41.466 [2024-11-15 12:52:01.906097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:118952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.466 [2024-11-15 12:52:01.906125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:41.466 [2024-11-15 12:52:01.906148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:118960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.466 [2024-11-15 12:52:01.906162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:41.466 [2024-11-15 12:52:01.906181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:118968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.466 [2024-11-15 12:52:01.906195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:41.466 [2024-11-15 12:52:01.906232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:118976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.466 [2024-11-15 12:52:01.906252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:41.466 [2024-11-15 12:52:01.906273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:118984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.466 [2024-11-15 12:52:01.906286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:41.466 [2024-11-15 12:52:01.906306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:23 nsid:1 lba:118992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.466 [2024-11-15 12:52:01.906319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:41.466 [2024-11-15 12:52:01.906338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:119000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.466 [2024-11-15 12:52:01.906352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.906371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:119008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.467 [2024-11-15 12:52:01.906384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.906404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:119016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.467 [2024-11-15 12:52:01.906417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.906436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:119024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.467 [2024-11-15 12:52:01.906450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.906469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:119032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.467 [2024-11-15 12:52:01.906482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.906501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:119040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.467 [2024-11-15 12:52:01.906514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.906534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:119048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.467 [2024-11-15 12:52:01.906548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.906577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:119056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.467 [2024-11-15 12:52:01.906592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.906625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:119064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.467 [2024-11-15 12:52:01.906640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.906659] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:119072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.467 [2024-11-15 12:52:01.906672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.906692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:119080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.467 [2024-11-15 12:52:01.906706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.906725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:119088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.467 [2024-11-15 12:52:01.906738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.906757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:119096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.467 [2024-11-15 12:52:01.906770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.906790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:118528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.467 [2024-11-15 12:52:01.906803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.906822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:118536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.467 [2024-11-15 12:52:01.906835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.906855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:118544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.467 [2024-11-15 12:52:01.906869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.906888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:118552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.467 [2024-11-15 12:52:01.906902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.906921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:118560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.467 [2024-11-15 12:52:01.906934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.906953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:118568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.467 [2024-11-15 12:52:01.906967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 
sqhd:006b p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.906993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:118576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.467 [2024-11-15 12:52:01.907008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.907027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:118584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.467 [2024-11-15 12:52:01.907040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.907060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:119104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.467 [2024-11-15 12:52:01.907073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.907092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:119112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.467 [2024-11-15 12:52:01.907106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.907125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:119120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.467 [2024-11-15 12:52:01.907139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.907157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:119128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.467 [2024-11-15 12:52:01.907171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.907190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:119136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.467 [2024-11-15 12:52:01.907203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.907222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:119144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.467 [2024-11-15 12:52:01.907235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.907254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:119152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.467 [2024-11-15 12:52:01.907267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.907287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:119160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.467 [2024-11-15 12:52:01.907301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.907320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:119168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.467 [2024-11-15 12:52:01.907333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.907352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:119176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.467 [2024-11-15 12:52:01.907365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.907384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:119184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.467 [2024-11-15 12:52:01.907404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.907424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:119192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.467 [2024-11-15 12:52:01.907437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.907456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:119200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.467 [2024-11-15 12:52:01.907470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.907490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:119208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.467 [2024-11-15 12:52:01.907503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.907522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:119216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.467 [2024-11-15 12:52:01.907536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.907555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:119224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.467 [2024-11-15 12:52:01.907568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.907630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:119232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.467 [2024-11-15 12:52:01.907650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.907672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:119240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.467 [2024-11-15 
12:52:01.907687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:41.467 [2024-11-15 12:52:01.907707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:118592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.467 [2024-11-15 12:52:01.907721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.907741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:118600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.468 [2024-11-15 12:52:01.907755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.907775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:118608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.468 [2024-11-15 12:52:01.907789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.907808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:118616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.468 [2024-11-15 12:52:01.907822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.907842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:118624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.468 [2024-11-15 12:52:01.907864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.907885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:118632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.468 [2024-11-15 12:52:01.907899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.907919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:118640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.468 [2024-11-15 12:52:01.907933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.907952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:118648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.468 [2024-11-15 12:52:01.907966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.907985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:119248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.468 [2024-11-15 12:52:01.908000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.908033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 
lba:119256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.468 [2024-11-15 12:52:01.908047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.908066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:119264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.468 [2024-11-15 12:52:01.908079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.908100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:119272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.468 [2024-11-15 12:52:01.908113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.908132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:119280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.468 [2024-11-15 12:52:01.908145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.908165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:119288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.468 [2024-11-15 12:52:01.908178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.908202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:119296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.468 [2024-11-15 12:52:01.908216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.908235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:119304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.468 [2024-11-15 12:52:01.908249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.908268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:119312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.468 [2024-11-15 12:52:01.908287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.908308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:119320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.468 [2024-11-15 12:52:01.908322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.908341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:119328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.468 [2024-11-15 12:52:01.908354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.908373] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:119336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.468 [2024-11-15 12:52:01.908386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.908405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:119344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.468 [2024-11-15 12:52:01.908419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.908438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:119352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.468 [2024-11-15 12:52:01.908451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.908471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:119360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.468 [2024-11-15 12:52:01.908484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.908503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:119368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.468 [2024-11-15 12:52:01.908517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.908536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:119376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.468 [2024-11-15 12:52:01.908549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.908568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:119384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.468 [2024-11-15 12:52:01.908582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.908601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:119392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.468 [2024-11-15 12:52:01.908630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.908676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:118656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.468 [2024-11-15 12:52:01.908697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.908717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:118664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.468 [2024-11-15 12:52:01.908731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001c p:0 
m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.908760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:118672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.468 [2024-11-15 12:52:01.908775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.908797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:118680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.468 [2024-11-15 12:52:01.908812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.908832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:118688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.468 [2024-11-15 12:52:01.908846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.908865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:118696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.468 [2024-11-15 12:52:01.908878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.908898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:118704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.468 [2024-11-15 12:52:01.908912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.908931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:118712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.468 [2024-11-15 12:52:01.908945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.908965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:118720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.468 [2024-11-15 12:52:01.908979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.908999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:118728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.468 [2024-11-15 12:52:01.909013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.909047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:118736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.468 [2024-11-15 12:52:01.909060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.909080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:118744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.468 [2024-11-15 12:52:01.909093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:41.468 [2024-11-15 12:52:01.909112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:118752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.469 [2024-11-15 12:52:01.909125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:01.909144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:118760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.469 [2024-11-15 12:52:01.909158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:01.909183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:118768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.469 [2024-11-15 12:52:01.909197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:01.909216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:118776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.469 [2024-11-15 12:52:01.909230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:01.909249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:118784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.469 [2024-11-15 12:52:01.909262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:01.909281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:118792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.469 [2024-11-15 12:52:01.909294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:01.909313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:118800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.469 [2024-11-15 12:52:01.909327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:01.909348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:118808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.469 [2024-11-15 12:52:01.909361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:01.910834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:118816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.469 [2024-11-15 12:52:01.910867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:01.910895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:118824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.469 [2024-11-15 
12:52:01.910911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:01.910931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:118832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.469 [2024-11-15 12:52:01.910945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:01.910965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:118840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.469 [2024-11-15 12:52:01.910979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:01.911013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:119400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.469 [2024-11-15 12:52:01.911026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:01.911045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:119408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.469 [2024-11-15 12:52:01.911060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:01.911080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:119416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.469 [2024-11-15 12:52:01.911106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:01.911127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:119424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.469 [2024-11-15 12:52:01.911141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:01.911161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:119432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.469 [2024-11-15 12:52:01.911175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:01.911194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:119440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.469 [2024-11-15 12:52:01.911207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:01.911227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:119448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.469 [2024-11-15 12:52:01.911240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:01.911273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:119456 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.469 [2024-11-15 12:52:01.911291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:01.911311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:119464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.469 [2024-11-15 12:52:01.911325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:01.911344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.469 [2024-11-15 12:52:01.911358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:01.911377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:119480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.469 [2024-11-15 12:52:01.911391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:41.469 9872.67 IOPS, 38.57 MiB/s [2024-11-15T12:52:50.139Z] 9915.00 IOPS, 38.73 MiB/s [2024-11-15T12:52:50.139Z] 9947.00 IOPS, 38.86 MiB/s [2024-11-15T12:52:50.139Z] 9976.75 IOPS, 38.97 MiB/s [2024-11-15T12:52:50.139Z] 10002.54 IOPS, 39.07 MiB/s [2024-11-15T12:52:50.139Z] 10026.93 IOPS, 39.17 MiB/s [2024-11-15T12:52:50.139Z] [2024-11-15 12:52:08.467431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.469 [2024-11-15 12:52:08.467486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:08.467556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:129232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.469 [2024-11-15 12:52:08.467577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:08.467598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.469 [2024-11-15 12:52:08.467612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:08.467698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.469 [2024-11-15 12:52:08.467717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:08.467739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:129256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.469 [2024-11-15 12:52:08.467755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:08.467778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 
lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.469 [2024-11-15 12:52:08.467793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:08.467815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:129272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.469 [2024-11-15 12:52:08.467830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:08.467852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:129280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.469 [2024-11-15 12:52:08.467868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:08.467890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:129288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.469 [2024-11-15 12:52:08.467905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:08.467927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.469 [2024-11-15 12:52:08.467972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:08.468006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.469 [2024-11-15 12:52:08.468035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:08.468055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:129312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.469 [2024-11-15 12:52:08.468069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:08.468088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.469 [2024-11-15 12:52:08.468101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:08.468121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:129328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.469 [2024-11-15 12:52:08.468135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:08.468185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.469 [2024-11-15 12:52:08.468198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:41.469 [2024-11-15 12:52:08.468227] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:129344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.470 [2024-11-15 12:52:08.468242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.468263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:128776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.470 [2024-11-15 12:52:08.468277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.468297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:128784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.470 [2024-11-15 12:52:08.468310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.468345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:128792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.470 [2024-11-15 12:52:08.468360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.468381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:128800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.470 [2024-11-15 12:52:08.468395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.468415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:128808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.470 [2024-11-15 12:52:08.468429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.468450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:128816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.470 [2024-11-15 12:52:08.468464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.468484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:128824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.470 [2024-11-15 12:52:08.468499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.468519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:128832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.470 [2024-11-15 12:52:08.468534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.468569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:129352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.470 [2024-11-15 12:52:08.468588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0058 
p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.468609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.470 [2024-11-15 12:52:08.468640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.468663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.470 [2024-11-15 12:52:08.468678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.468700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.470 [2024-11-15 12:52:08.468741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.468767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.470 [2024-11-15 12:52:08.468783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.468808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:129392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.470 [2024-11-15 12:52:08.468824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.468846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:129400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.470 [2024-11-15 12:52:08.468861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.468883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:129408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.470 [2024-11-15 12:52:08.468898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.468919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:129416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.470 [2024-11-15 12:52:08.468959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.468995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.470 [2024-11-15 12:52:08.469025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.469046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.470 [2024-11-15 12:52:08.469061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.469082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:129440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.470 [2024-11-15 12:52:08.469096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.469117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.470 [2024-11-15 12:52:08.469131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.469152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:129456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.470 [2024-11-15 12:52:08.469167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.469188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:129464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.470 [2024-11-15 12:52:08.469202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.469224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:129472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.470 [2024-11-15 12:52:08.469257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.469282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:129480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.470 [2024-11-15 12:52:08.469298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.469319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:129488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.470 [2024-11-15 12:52:08.469348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.469367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:128840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.470 [2024-11-15 12:52:08.469381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.469401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:128848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.470 [2024-11-15 12:52:08.469415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.469434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:128856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.470 [2024-11-15 12:52:08.469448] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.469469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.470 [2024-11-15 12:52:08.469483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.469503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:128872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.470 [2024-11-15 12:52:08.469517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:41.470 [2024-11-15 12:52:08.469537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:128880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.470 [2024-11-15 12:52:08.469550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.469571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:128888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.471 [2024-11-15 12:52:08.469585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.469605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.471 [2024-11-15 12:52:08.469634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.469657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:129496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.471 [2024-11-15 12:52:08.469695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.469741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:129504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.471 [2024-11-15 12:52:08.469769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.469794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.471 [2024-11-15 12:52:08.469810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.469832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:129520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.471 [2024-11-15 12:52:08.469848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.469870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:129528 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:17:41.471 [2024-11-15 12:52:08.469886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.469908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:129536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.471 [2024-11-15 12:52:08.469924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.469964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:129544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.471 [2024-11-15 12:52:08.469985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.470037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:129552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.471 [2024-11-15 12:52:08.470052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.470072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:129560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.471 [2024-11-15 12:52:08.470087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.470108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.471 [2024-11-15 12:52:08.470123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.470143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.471 [2024-11-15 12:52:08.470184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.470206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:129584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.471 [2024-11-15 12:52:08.470220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.470242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:129592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.471 [2024-11-15 12:52:08.470256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.470277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:129600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.471 [2024-11-15 12:52:08.470291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.470348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:10 nsid:1 lba:129608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.471 [2024-11-15 12:52:08.470364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.470385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:129616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.471 [2024-11-15 12:52:08.470399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.470419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:129624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.471 [2024-11-15 12:52:08.470434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.470454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:129632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.471 [2024-11-15 12:52:08.470468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.470499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:129640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.471 [2024-11-15 12:52:08.470513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.470533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.471 [2024-11-15 12:52:08.470547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.470566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:128912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.471 [2024-11-15 12:52:08.470580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.470600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.471 [2024-11-15 12:52:08.470641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.470664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.471 [2024-11-15 12:52:08.470679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.470721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:128936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.471 [2024-11-15 12:52:08.470741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.470763] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:128944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.471 [2024-11-15 12:52:08.470780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.470803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.471 [2024-11-15 12:52:08.470819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.470849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:128960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.471 [2024-11-15 12:52:08.470866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.470889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:128968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.471 [2024-11-15 12:52:08.470904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.470927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:128976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.471 [2024-11-15 12:52:08.470942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.470965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:128984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.471 [2024-11-15 12:52:08.470980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.471003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.471 [2024-11-15 12:52:08.471018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.471040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:129000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.471 [2024-11-15 12:52:08.471056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.471078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:129008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.471 [2024-11-15 12:52:08.471094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.471116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:129016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.471 [2024-11-15 12:52:08.471131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 
cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.471154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:129024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.471 [2024-11-15 12:52:08.471169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.471206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:129032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.471 [2024-11-15 12:52:08.471221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:41.471 [2024-11-15 12:52:08.471242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:129040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.471 [2024-11-15 12:52:08.471257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.471279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:129048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.472 [2024-11-15 12:52:08.471294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.471316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:129056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.472 [2024-11-15 12:52:08.471337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.471374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:129064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.472 [2024-11-15 12:52:08.471400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.471420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:129072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.472 [2024-11-15 12:52:08.471434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.471456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:129080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.472 [2024-11-15 12:52:08.471471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.471491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:129088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.472 [2024-11-15 12:52:08.471517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.471538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:129648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.472 [2024-11-15 12:52:08.471552] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.471572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:129656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.472 [2024-11-15 12:52:08.471585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.471617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:129664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.472 [2024-11-15 12:52:08.471647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.471674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:129672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.472 [2024-11-15 12:52:08.471705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.471729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:129680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.472 [2024-11-15 12:52:08.471745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.471767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.472 [2024-11-15 12:52:08.471783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.471805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.472 [2024-11-15 12:52:08.471821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.471843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:129704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.472 [2024-11-15 12:52:08.471866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.471890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:129712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.472 [2024-11-15 12:52:08.471905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.471928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:129720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.472 [2024-11-15 12:52:08.471943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.471994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:129728 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:41.472 [2024-11-15 12:52:08.472009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.472058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:129096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.472 [2024-11-15 12:52:08.472072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.472093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:129104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.472 [2024-11-15 12:52:08.472107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.472127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:129112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.472 [2024-11-15 12:52:08.472140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.472160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:129120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.472 [2024-11-15 12:52:08.472174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.472194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:129128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.472 [2024-11-15 12:52:08.472207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.472227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:129136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.472 [2024-11-15 12:52:08.472241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.472261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:129144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.472 [2024-11-15 12:52:08.472274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.472294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:129152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.472 [2024-11-15 12:52:08.472309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.472328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:129160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.472 [2024-11-15 12:52:08.472348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.472369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:51 nsid:1 lba:129168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.472 [2024-11-15 12:52:08.472383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.472403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:129176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.472 [2024-11-15 12:52:08.472417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.472437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:129184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.472 [2024-11-15 12:52:08.472450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.472470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:129192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.472 [2024-11-15 12:52:08.472484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.472503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:129200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.472 [2024-11-15 12:52:08.472517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.472537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:129208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.472 [2024-11-15 12:52:08.472551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:41.472 [2024-11-15 12:52:08.473320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:129216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.472 [2024-11-15 12:52:08.473348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:08.473380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:129736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.473 [2024-11-15 12:52:08.473396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:08.473423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:129744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.473 [2024-11-15 12:52:08.473438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:08.473464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:129752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.473 [2024-11-15 12:52:08.473479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 
12:52:08.473505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:129760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.473 [2024-11-15 12:52:08.473530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:08.473557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.473 [2024-11-15 12:52:08.473570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:08.473623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:129776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.473 [2024-11-15 12:52:08.473657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:08.473713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:129784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.473 [2024-11-15 12:52:08.473733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:08.473780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:129792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.473 [2024-11-15 12:52:08.473800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:41.473 9811.27 IOPS, 38.33 MiB/s [2024-11-15T12:52:50.143Z] 9391.50 IOPS, 36.69 MiB/s [2024-11-15T12:52:50.143Z] 9428.24 IOPS, 36.83 MiB/s [2024-11-15T12:52:50.143Z] 9466.22 IOPS, 36.98 MiB/s [2024-11-15T12:52:50.143Z] 9502.32 IOPS, 37.12 MiB/s [2024-11-15T12:52:50.143Z] 9529.20 IOPS, 37.22 MiB/s [2024-11-15T12:52:50.143Z] 9552.00 IOPS, 37.31 MiB/s [2024-11-15T12:52:50.143Z] [2024-11-15 12:52:15.596373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:95616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.473 [2024-11-15 12:52:15.596427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:15.596496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.473 [2024-11-15 12:52:15.596515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:15.596535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.473 [2024-11-15 12:52:15.596549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:15.596567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.473 [2024-11-15 12:52:15.596580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:73 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:15.596599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.473 [2024-11-15 12:52:15.596627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:15.596660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.473 [2024-11-15 12:52:15.596675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:15.596694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.473 [2024-11-15 12:52:15.596707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:15.596726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.473 [2024-11-15 12:52:15.596738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:15.596757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.473 [2024-11-15 12:52:15.596793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:15.596815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.473 [2024-11-15 12:52:15.596829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:15.596848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:95696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.473 [2024-11-15 12:52:15.596861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:15.596880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:95704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.473 [2024-11-15 12:52:15.596893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:15.596911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.473 [2024-11-15 12:52:15.596924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:15.596942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:95720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.473 [2024-11-15 12:52:15.596955] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:15.596974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:95728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.473 [2024-11-15 12:52:15.596986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:15.597019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:95736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.473 [2024-11-15 12:52:15.597031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:15.597049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.473 [2024-11-15 12:52:15.597062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:15.597081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.473 [2024-11-15 12:52:15.597094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:15.597112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.473 [2024-11-15 12:52:15.597124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:15.597143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.473 [2024-11-15 12:52:15.597155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:15.597173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.473 [2024-11-15 12:52:15.597193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:15.597213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.473 [2024-11-15 12:52:15.597226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:15.597245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.473 [2024-11-15 12:52:15.597258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:15.597277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.473 [2024-11-15 
12:52:15.597290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:15.597327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:95744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.473 [2024-11-15 12:52:15.597345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:15.597366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.473 [2024-11-15 12:52:15.597379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:41.473 [2024-11-15 12:52:15.597398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:95760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.473 [2024-11-15 12:52:15.597410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.597429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:95768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.474 [2024-11-15 12:52:15.597442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.597461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.474 [2024-11-15 12:52:15.597473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.597492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:95784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.474 [2024-11-15 12:52:15.597504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.597523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.474 [2024-11-15 12:52:15.597535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.597554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.474 [2024-11-15 12:52:15.597567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.597588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.474 [2024-11-15 12:52:15.597602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.597661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:95816 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:41.474 [2024-11-15 12:52:15.597718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.597740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.474 [2024-11-15 12:52:15.597754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.597774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.474 [2024-11-15 12:52:15.597788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.597808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:95840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.474 [2024-11-15 12:52:15.597822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.597842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.474 [2024-11-15 12:52:15.597856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.597876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.474 [2024-11-15 12:52:15.597889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.597909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:95864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.474 [2024-11-15 12:52:15.597923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.597943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.474 [2024-11-15 12:52:15.597957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.597978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.474 [2024-11-15 12:52:15.597991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.598011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:95184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.474 [2024-11-15 12:52:15.598040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.598088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:46 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.474 [2024-11-15 12:52:15.598101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.598119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.474 [2024-11-15 12:52:15.598132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.598158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.474 [2024-11-15 12:52:15.598172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.598190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.474 [2024-11-15 12:52:15.598204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.598222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.474 [2024-11-15 12:52:15.598235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.598254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.474 [2024-11-15 12:52:15.598267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.598286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.474 [2024-11-15 12:52:15.598299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.598317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.474 [2024-11-15 12:52:15.598329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.598348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.474 [2024-11-15 12:52:15.598360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.598379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:95264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.474 [2024-11-15 12:52:15.598391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.598410] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.474 [2024-11-15 12:52:15.598423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.598441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.474 [2024-11-15 12:52:15.598453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.598472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.474 [2024-11-15 12:52:15.598484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.598503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:95296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.474 [2024-11-15 12:52:15.598516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.598534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.474 [2024-11-15 12:52:15.598556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.598575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.474 [2024-11-15 12:52:15.598588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.598607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.474 [2024-11-15 12:52:15.598619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.598638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.474 [2024-11-15 12:52:15.598650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:41.474 [2024-11-15 12:52:15.598669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.474 [2024-11-15 12:52:15.598695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.475 [2024-11-15 12:52:15.598716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.475 [2024-11-15 12:52:15.598729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:17:41.475 [2024-11-15 12:52:15.598748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.475 [2024-11-15 12:52:15.598762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:41.475 [2024-11-15 12:52:15.598795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:95872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.475 [2024-11-15 12:52:15.598812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:41.475 [2024-11-15 12:52:15.598833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.475 [2024-11-15 12:52:15.598846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:41.475 [2024-11-15 12:52:15.598865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.475 [2024-11-15 12:52:15.598877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:41.475 [2024-11-15 12:52:15.598896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.475 [2024-11-15 12:52:15.598909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:41.475 [2024-11-15 12:52:15.598927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.475 [2024-11-15 12:52:15.598940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:41.475 [2024-11-15 12:52:15.598958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:95912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.475 [2024-11-15 12:52:15.598979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:41.475 [2024-11-15 12:52:15.598999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.475 [2024-11-15 12:52:15.599030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:41.475 [2024-11-15 12:52:15.599048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:95928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.475 [2024-11-15 12:52:15.599062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:41.475 [2024-11-15 12:52:15.599081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.475 [2024-11-15 12:52:15.599093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:41.475 [2024-11-15 12:52:15.599112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.475 [2024-11-15 12:52:15.599125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:41.475 [2024-11-15 12:52:15.599144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.475 [2024-11-15 12:52:15.599157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:41.475 [2024-11-15 12:52:15.599176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:95960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.475 [2024-11-15 12:52:15.599189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:41.475 [2024-11-15 12:52:15.599208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:95360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.475 [2024-11-15 12:52:15.599220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:41.475 [2024-11-15 12:52:15.599239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.475 [2024-11-15 12:52:15.599252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:41.475 [2024-11-15 12:52:15.599271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.475 [2024-11-15 12:52:15.599284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:41.475 [2024-11-15 12:52:15.599303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.475 [2024-11-15 12:52:15.599316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:41.475 [2024-11-15 12:52:15.599335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:95392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.475 [2024-11-15 12:52:15.599348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:41.475 [2024-11-15 12:52:15.599367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.475 [2024-11-15 12:52:15.599380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:41.475 [2024-11-15 12:52:15.599406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.475 [2024-11-15 12:52:15.599420] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:41.475 [2024-11-15 12:52:15.599439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:95416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.475 [2024-11-15 12:52:15.599452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:41.475 [2024-11-15 12:52:15.599472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.475 [2024-11-15 12:52:15.599485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:41.475 [2024-11-15 12:52:15.599504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.475 [2024-11-15 12:52:15.599517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:41.475 [2024-11-15 12:52:15.599536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.475 [2024-11-15 12:52:15.599549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:41.475 [2024-11-15 12:52:15.599568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:95992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.475 [2024-11-15 12:52:15.599581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:41.475 [2024-11-15 12:52:15.599603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.475 [2024-11-15 12:52:15.599628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:41.475 [2024-11-15 12:52:15.599650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.475 [2024-11-15 12:52:15.599664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.599682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.476 [2024-11-15 12:52:15.599696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.599715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.476 [2024-11-15 12:52:15.599728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.599746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:41.476 [2024-11-15 12:52:15.599759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.599778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.476 [2024-11-15 12:52:15.599791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.599818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.476 [2024-11-15 12:52:15.599832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.599851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.476 [2024-11-15 12:52:15.599864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.599883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:95424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.476 [2024-11-15 12:52:15.599896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.599920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.476 [2024-11-15 12:52:15.599934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.599953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.476 [2024-11-15 12:52:15.599966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.599985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:95448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.476 [2024-11-15 12:52:15.599998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.600017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:95456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.476 [2024-11-15 12:52:15.600030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.600049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.476 [2024-11-15 12:52:15.600062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.600081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 
lba:95472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.476 [2024-11-15 12:52:15.600093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.600112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.476 [2024-11-15 12:52:15.600125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.600144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.476 [2024-11-15 12:52:15.600157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.600176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:95496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.476 [2024-11-15 12:52:15.600189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.600208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:95504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.476 [2024-11-15 12:52:15.600227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.600247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.476 [2024-11-15 12:52:15.600260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.600278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.476 [2024-11-15 12:52:15.600291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.600310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.476 [2024-11-15 12:52:15.600323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.600342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.476 [2024-11-15 12:52:15.600355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.600380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.476 [2024-11-15 12:52:15.600393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.600413] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.476 [2024-11-15 12:52:15.600426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.600445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.476 [2024-11-15 12:52:15.600459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.600478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.476 [2024-11-15 12:52:15.600491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.600509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.476 [2024-11-15 12:52:15.600522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.600541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.476 [2024-11-15 12:52:15.600555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.600574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.476 [2024-11-15 12:52:15.600587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.600632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.476 [2024-11-15 12:52:15.600656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.601331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.476 [2024-11-15 12:52:15.601358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.601390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.476 [2024-11-15 12:52:15.601420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.601447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.476 [2024-11-15 12:52:15.601460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 
dnr:0 00:17:41.476 [2024-11-15 12:52:15.601486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.476 [2024-11-15 12:52:15.601499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.601525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.476 [2024-11-15 12:52:15.601538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:41.476 [2024-11-15 12:52:15.601564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.477 [2024-11-15 12:52:15.601577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:15.601602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.477 [2024-11-15 12:52:15.601631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:15.601657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.477 [2024-11-15 12:52:15.601726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:15.601773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.477 [2024-11-15 12:52:15.601792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:41.477 9477.09 IOPS, 37.02 MiB/s [2024-11-15T12:52:50.147Z] 9065.04 IOPS, 35.41 MiB/s [2024-11-15T12:52:50.147Z] 8687.33 IOPS, 33.93 MiB/s [2024-11-15T12:52:50.147Z] 8339.84 IOPS, 32.58 MiB/s [2024-11-15T12:52:50.147Z] 8019.08 IOPS, 31.32 MiB/s [2024-11-15T12:52:50.147Z] 7722.07 IOPS, 30.16 MiB/s [2024-11-15T12:52:50.147Z] 7446.29 IOPS, 29.09 MiB/s [2024-11-15T12:52:50.147Z] 7247.14 IOPS, 28.31 MiB/s [2024-11-15T12:52:50.147Z] 7333.03 IOPS, 28.64 MiB/s [2024-11-15T12:52:50.147Z] 7419.58 IOPS, 28.98 MiB/s [2024-11-15T12:52:50.147Z] 7503.72 IOPS, 29.31 MiB/s [2024-11-15T12:52:50.147Z] 7576.94 IOPS, 29.60 MiB/s [2024-11-15T12:52:50.147Z] 7647.03 IOPS, 29.87 MiB/s [2024-11-15T12:52:50.147Z] 7710.14 IOPS, 30.12 MiB/s [2024-11-15T12:52:50.147Z] [2024-11-15 12:52:29.042319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.477 [2024-11-15 12:52:29.042372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:29.042441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.477 [2024-11-15 12:52:29.042479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:29.042502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.477 [2024-11-15 12:52:29.042515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:29.042534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.477 [2024-11-15 12:52:29.042547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:29.042565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.477 [2024-11-15 12:52:29.042578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:29.042596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.477 [2024-11-15 12:52:29.042609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:29.042644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.477 [2024-11-15 12:52:29.042659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:29.042678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.477 [2024-11-15 12:52:29.042691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:29.042709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:81144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.477 [2024-11-15 12:52:29.042722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:29.042740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.477 [2024-11-15 12:52:29.042752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:29.042771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.477 [2024-11-15 12:52:29.042783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:29.042801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:81168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.477 [2024-11-15 12:52:29.042814] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:29.042832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.477 [2024-11-15 12:52:29.042845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:29.042863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:81184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.477 [2024-11-15 12:52:29.042884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:29.042904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.477 [2024-11-15 12:52:29.042917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:29.042935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:81200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.477 [2024-11-15 12:52:29.042947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:29.042966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:81208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.477 [2024-11-15 12:52:29.042978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:29.042998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.477 [2024-11-15 12:52:29.043011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:29.043029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.477 [2024-11-15 12:52:29.043042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:29.043061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.477 [2024-11-15 12:52:29.043074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:29.043092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:81240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.477 [2024-11-15 12:52:29.043104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:29.043123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:81248 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:41.477 [2024-11-15 12:52:29.043135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:29.043153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.477 [2024-11-15 12:52:29.043166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:29.043185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.477 [2024-11-15 12:52:29.043197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:29.043215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.477 [2024-11-15 12:52:29.043228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:29.043246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:81280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.477 [2024-11-15 12:52:29.043258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:29.043300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:81288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.477 [2024-11-15 12:52:29.043314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:29.043333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.477 [2024-11-15 12:52:29.043346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:41.477 [2024-11-15 12:52:29.043365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.478 [2024-11-15 12:52:29.043378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.043397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:81312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.478 [2024-11-15 12:52:29.043410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.043429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:81320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.478 [2024-11-15 12:52:29.043441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.043460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:47 nsid:1 lba:81328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.478 [2024-11-15 12:52:29.043473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.043519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.478 [2024-11-15 12:52:29.043539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.043555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.478 [2024-11-15 12:52:29.043567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.043581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.478 [2024-11-15 12:52:29.043593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.043621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.478 [2024-11-15 12:52:29.043647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.043662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.478 [2024-11-15 12:52:29.043674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.043687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.478 [2024-11-15 12:52:29.043699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.043721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.478 [2024-11-15 12:52:29.043734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.043747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.478 [2024-11-15 12:52:29.043759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.043772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.478 [2024-11-15 12:52:29.043783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.043796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:81344 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.478 [2024-11-15 12:52:29.043808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.043821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:81352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.478 [2024-11-15 12:52:29.043833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.043846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.478 [2024-11-15 12:52:29.043857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.043871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.478 [2024-11-15 12:52:29.043882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.043895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.478 [2024-11-15 12:52:29.043907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.043920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.478 [2024-11-15 12:52:29.043932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.043946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.478 [2024-11-15 12:52:29.043957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.043971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.478 [2024-11-15 12:52:29.043982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.043996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:81408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.478 [2024-11-15 12:52:29.044007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.044021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:81416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.478 [2024-11-15 12:52:29.044032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.044050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:41.478 [2024-11-15 12:52:29.044063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.044076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.478 [2024-11-15 12:52:29.044088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.044101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.478 [2024-11-15 12:52:29.044112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.044125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.478 [2024-11-15 12:52:29.044136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.044149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.478 [2024-11-15 12:52:29.044161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.044175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.478 [2024-11-15 12:52:29.044187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.044201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.478 [2024-11-15 12:52:29.044212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.044225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.478 [2024-11-15 12:52:29.044237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.044250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.478 [2024-11-15 12:52:29.044261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.044274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.478 [2024-11-15 12:52:29.044286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.044299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.478 [2024-11-15 12:52:29.044310] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.044323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.478 [2024-11-15 12:52:29.044335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.478 [2024-11-15 12:52:29.044348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.478 [2024-11-15 12:52:29.044364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.044379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:81464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.479 [2024-11-15 12:52:29.044391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.044405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.479 [2024-11-15 12:52:29.044417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.044430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.479 [2024-11-15 12:52:29.044442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.044455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.479 [2024-11-15 12:52:29.044466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.044479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.479 [2024-11-15 12:52:29.044491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.044504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:81504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.479 [2024-11-15 12:52:29.044516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.044529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.479 [2024-11-15 12:52:29.044541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.044554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.479 [2024-11-15 12:52:29.044566] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.044579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:81528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.479 [2024-11-15 12:52:29.044590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.044623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.479 [2024-11-15 12:52:29.044636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.044649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.479 [2024-11-15 12:52:29.044661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.044674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.479 [2024-11-15 12:52:29.044685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.044705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:81560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.479 [2024-11-15 12:52:29.044717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.044730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.479 [2024-11-15 12:52:29.044742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.044755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.479 [2024-11-15 12:52:29.044766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.044779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.479 [2024-11-15 12:52:29.044791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.044804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.479 [2024-11-15 12:52:29.044816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.044829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.479 [2024-11-15 12:52:29.044841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.044854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.479 [2024-11-15 12:52:29.044866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.044879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.479 [2024-11-15 12:52:29.044890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.044904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.479 [2024-11-15 12:52:29.044915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.044928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.479 [2024-11-15 12:52:29.044958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.044971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.479 [2024-11-15 12:52:29.044984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.044997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.479 [2024-11-15 12:52:29.045009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.045022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.479 [2024-11-15 12:52:29.045039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.045054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.479 [2024-11-15 12:52:29.045065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.045079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.479 [2024-11-15 12:52:29.045091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.045104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.479 [2024-11-15 12:52:29.045116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 
[2024-11-15 12:52:29.045130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.479 [2024-11-15 12:52:29.045141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.045155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.479 [2024-11-15 12:52:29.045167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.045181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.479 [2024-11-15 12:52:29.045192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.045206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.479 [2024-11-15 12:52:29.045218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.045232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.479 [2024-11-15 12:52:29.045244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.045257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.479 [2024-11-15 12:52:29.045269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.479 [2024-11-15 12:52:29.045283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.479 [2024-11-15 12:52:29.045295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.045308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.480 [2024-11-15 12:52:29.045320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.045334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.480 [2024-11-15 12:52:29.045345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.045359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.480 [2024-11-15 12:52:29.045376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.045391] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:81608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.480 [2024-11-15 12:52:29.045403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.045417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.480 [2024-11-15 12:52:29.045429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.045442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.480 [2024-11-15 12:52:29.045454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.045468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.480 [2024-11-15 12:52:29.045480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.045494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:81640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.480 [2024-11-15 12:52:29.045506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.045520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.480 [2024-11-15 12:52:29.045531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.045545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.480 [2024-11-15 12:52:29.045557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.045570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.480 [2024-11-15 12:52:29.045582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.045596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.480 [2024-11-15 12:52:29.045623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.045647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.480 [2024-11-15 12:52:29.045661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.045701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:80 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.480 [2024-11-15 12:52:29.045715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.045729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.480 [2024-11-15 12:52:29.045742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.045766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.480 [2024-11-15 12:52:29.045780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.045794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.480 [2024-11-15 12:52:29.045807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.045821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.480 [2024-11-15 12:52:29.045834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.045848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.480 [2024-11-15 12:52:29.045860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.045874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.480 [2024-11-15 12:52:29.045892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.045907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.480 [2024-11-15 12:52:29.045920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.045934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:81656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.480 [2024-11-15 12:52:29.045947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.045961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:81664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.480 [2024-11-15 12:52:29.045974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.045988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:81672 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.480 [2024-11-15 12:52:29.046001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.046015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.480 [2024-11-15 12:52:29.046042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.046070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:81688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.480 [2024-11-15 12:52:29.046083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.046096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.480 [2024-11-15 12:52:29.046108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.046121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:81704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.480 [2024-11-15 12:52:29.046138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.046152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8e290 is same with the state(6) to be set 00:17:41.480 [2024-11-15 12:52:29.046166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:41.480 [2024-11-15 12:52:29.046179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:41.480 [2024-11-15 12:52:29.046189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81712 len:8 PRP1 0x0 PRP2 0x0 00:17:41.480 [2024-11-15 12:52:29.046201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.046328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:41.480 [2024-11-15 12:52:29.046353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.046368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:41.480 [2024-11-15 12:52:29.046380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.480 [2024-11-15 12:52:29.046393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:41.480 [2024-11-15 12:52:29.046404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.481 [2024-11-15 12:52:29.046416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:17:41.481 [2024-11-15 12:52:29.046428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.481 [2024-11-15 12:52:29.046441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.481 [2024-11-15 12:52:29.046453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.481 [2024-11-15 12:52:29.046472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafae50 is same with the state(6) to be set 00:17:41.481 [2024-11-15 12:52:29.047468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:17:41.481 [2024-11-15 12:52:29.047506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xafae50 (9): Bad file descriptor 00:17:41.481 [2024-11-15 12:52:29.047855] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:41.481 [2024-11-15 12:52:29.047887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xafae50 with addr=10.0.0.3, port=4421 00:17:41.481 [2024-11-15 12:52:29.047904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafae50 is same with the state(6) to be set 00:17:41.481 [2024-11-15 12:52:29.047933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xafae50 (9): Bad file descriptor 00:17:41.481 [2024-11-15 12:52:29.047962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:17:41.481 [2024-11-15 12:52:29.047977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:17:41.481 [2024-11-15 12:52:29.047990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:17:41.481 [2024-11-15 12:52:29.048002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:17:41.481 [2024-11-15 12:52:29.048027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:17:41.481 7768.00 IOPS, 30.34 MiB/s [2024-11-15T12:52:50.151Z] 7813.84 IOPS, 30.52 MiB/s [2024-11-15T12:52:50.151Z] 7870.11 IOPS, 30.74 MiB/s [2024-11-15T12:52:50.151Z] 7924.72 IOPS, 30.96 MiB/s [2024-11-15T12:52:50.151Z] 7974.40 IOPS, 31.15 MiB/s [2024-11-15T12:52:50.151Z] 8022.44 IOPS, 31.34 MiB/s [2024-11-15T12:52:50.151Z] 8066.86 IOPS, 31.51 MiB/s [2024-11-15T12:52:50.151Z] 8104.74 IOPS, 31.66 MiB/s [2024-11-15T12:52:50.151Z] 8144.55 IOPS, 31.81 MiB/s [2024-11-15T12:52:50.151Z] 8185.42 IOPS, 31.97 MiB/s [2024-11-15T12:52:50.151Z] [2024-11-15 12:52:39.105933] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:17:41.481 8223.39 IOPS, 32.12 MiB/s [2024-11-15T12:52:50.151Z] 8259.66 IOPS, 32.26 MiB/s [2024-11-15T12:52:50.151Z] 8295.25 IOPS, 32.40 MiB/s [2024-11-15T12:52:50.151Z] 8329.71 IOPS, 32.54 MiB/s [2024-11-15T12:52:50.151Z] 8353.20 IOPS, 32.63 MiB/s [2024-11-15T12:52:50.151Z] 8384.55 IOPS, 32.75 MiB/s [2024-11-15T12:52:50.151Z] 8413.92 IOPS, 32.87 MiB/s [2024-11-15T12:52:50.151Z] 8442.49 IOPS, 32.98 MiB/s [2024-11-15T12:52:50.151Z] 8469.11 IOPS, 33.08 MiB/s [2024-11-15T12:52:50.151Z] 8494.76 IOPS, 33.18 MiB/s [2024-11-15T12:52:50.151Z] Received shutdown signal, test time was about 55.405447 seconds 00:17:41.481 00:17:41.481 Latency(us) 00:17:41.481 [2024-11-15T12:52:50.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.481 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:41.481 Verification LBA range: start 0x0 length 0x4000 00:17:41.481 Nvme0n1 : 55.40 8500.73 33.21 0.00 0.00 15028.37 983.04 7015926.69 00:17:41.481 [2024-11-15T12:52:50.151Z] =================================================================================================================== 00:17:41.481 [2024-11-15T12:52:50.151Z] Total : 8500.73 33.21 0.00 0.00 15028.37 983.04 7015926.69 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:41.481 rmmod nvme_tcp 00:17:41.481 rmmod nvme_fabrics 00:17:41.481 rmmod nvme_keyring 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 80067 ']' 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 80067 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80067 ']' 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80067 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80067 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:41.481 killing process with pid 80067 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80067' 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80067 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80067 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:41.481 12:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:41.481 12:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:41.481 12:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:41.481 12:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:41.481 12:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.481 12:52:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:41.481 12:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.481 12:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:17:41.481 00:17:41.481 real 1m1.072s 00:17:41.481 user 2m49.207s 00:17:41.481 sys 0m18.325s 00:17:41.481 12:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:41.481 ************************************ 00:17:41.481 END TEST nvmf_host_multipath 00:17:41.481 ************************************ 00:17:41.481 12:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.742 ************************************ 00:17:41.742 START TEST nvmf_timeout 00:17:41.742 ************************************ 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:17:41.742 * Looking for test storage... 00:17:41.742 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:41.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.742 --rc genhtml_branch_coverage=1 00:17:41.742 --rc genhtml_function_coverage=1 00:17:41.742 --rc genhtml_legend=1 00:17:41.742 --rc geninfo_all_blocks=1 00:17:41.742 --rc geninfo_unexecuted_blocks=1 00:17:41.742 00:17:41.742 ' 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:41.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.742 --rc genhtml_branch_coverage=1 00:17:41.742 --rc genhtml_function_coverage=1 00:17:41.742 --rc genhtml_legend=1 00:17:41.742 --rc geninfo_all_blocks=1 00:17:41.742 --rc geninfo_unexecuted_blocks=1 00:17:41.742 00:17:41.742 ' 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:41.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.742 --rc genhtml_branch_coverage=1 00:17:41.742 --rc genhtml_function_coverage=1 00:17:41.742 --rc genhtml_legend=1 00:17:41.742 --rc geninfo_all_blocks=1 00:17:41.742 --rc geninfo_unexecuted_blocks=1 00:17:41.742 00:17:41.742 ' 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:41.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.742 --rc genhtml_branch_coverage=1 00:17:41.742 --rc genhtml_function_coverage=1 00:17:41.742 --rc genhtml_legend=1 00:17:41.742 --rc geninfo_all_blocks=1 00:17:41.742 --rc geninfo_unexecuted_blocks=1 00:17:41.742 00:17:41.742 ' 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:41.742 
12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:41.742 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:41.743 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:41.743 12:52:50 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:41.743 Cannot find device "nvmf_init_br" 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:17:41.743 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:42.002 Cannot find device "nvmf_init_br2" 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:17:42.002 Cannot find device "nvmf_tgt_br" 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:42.002 Cannot find device "nvmf_tgt_br2" 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:42.002 Cannot find device "nvmf_init_br" 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:42.002 Cannot find device "nvmf_init_br2" 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:42.002 Cannot find device "nvmf_tgt_br" 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:42.002 Cannot find device "nvmf_tgt_br2" 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:42.002 Cannot find device "nvmf_br" 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:42.002 Cannot find device "nvmf_init_if" 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:42.002 Cannot find device "nvmf_init_if2" 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:42.002 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:42.002 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:42.002 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:42.003 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:42.003 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:42.003 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:42.003 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:42.003 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:42.261 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:42.261 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:42.261 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:42.261 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:42.261 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:42.261 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:42.261 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:42.261 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:42.261 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:42.261 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:42.261 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:42.261 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
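Condensed out of the trace above, the veth/namespace fixture that nvmf_veth_init assembles amounts to the shell sketch below. The interface names, the 10.0.0.1-10.0.0.4 addressing, and the port-4420 firewall rules are exactly the ones printed in the log; the SPDK_NVMF comment marker that the ipts wrapper appends and all cleanup/error handling are left out, so read it as an illustration of the topology rather than the suite's actual helper.

    #!/usr/bin/env bash
    set -e
    ip netns add nvmf_tgt_ns_spdk
    # one veth pair per initiator-side and per target-side interface
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # the target-facing ends live inside the namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # addressing matches the ping checks that follow in the log
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # a bridge ties the four host-side peer ends together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # admit NVMe/TCP (port 4420) on the initiator interfaces, let bridged traffic through
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings that follow (10.0.0.3 and 10.0.0.4 from the host, 10.0.0.1 and 10.0.0.2 from inside the namespace) are simply an end-to-end check of this topology before the target is started.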
00:17:42.261 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:42.261 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:42.261 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:17:42.261 00:17:42.261 --- 10.0.0.3 ping statistics --- 00:17:42.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.261 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:42.262 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:42.262 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:42.262 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:17:42.262 00:17:42.262 --- 10.0.0.4 ping statistics --- 00:17:42.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.262 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:17:42.262 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:42.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:42.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:17:42.262 00:17:42.262 --- 10.0.0.1 ping statistics --- 00:17:42.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.262 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:17:42.262 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:42.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:42.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:17:42.262 00:17:42.262 --- 10.0.0.2 ping statistics --- 00:17:42.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.262 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:17:42.262 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:42.262 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:17:42.262 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:42.262 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:42.262 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:42.262 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:42.262 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:42.262 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:42.262 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:42.262 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:17:42.262 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:42.262 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:42.262 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:42.262 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:42.262 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=81290 00:17:42.262 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 81290 00:17:42.262 12:52:50 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81290 ']' 00:17:42.262 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.262 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:42.262 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.262 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:42.262 12:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:42.262 [2024-11-15 12:52:50.840118] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:17:42.262 [2024-11-15 12:52:50.840213] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.520 [2024-11-15 12:52:50.984024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:42.520 [2024-11-15 12:52:51.014961] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.520 [2024-11-15 12:52:51.015039] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.520 [2024-11-15 12:52:51.015051] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:42.521 [2024-11-15 12:52:51.015058] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:42.521 [2024-11-15 12:52:51.015065] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:42.521 [2024-11-15 12:52:51.015854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.521 [2024-11-15 12:52:51.015863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.521 [2024-11-15 12:52:51.044365] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:42.521 12:52:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:42.521 12:52:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:17:42.521 12:52:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:42.521 12:52:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:42.521 12:52:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:42.521 12:52:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.521 12:52:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:42.521 12:52:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:42.779 [2024-11-15 12:52:51.410907] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.779 12:52:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:43.037 Malloc0 00:17:43.296 12:52:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:43.555 12:52:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:43.817 12:52:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:44.091 [2024-11-15 12:52:52.535287] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:44.091 12:52:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81333 00:17:44.091 12:52:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81333 /var/tmp/bdevperf.sock 00:17:44.091 12:52:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:17:44.091 12:52:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81333 ']' 00:17:44.091 12:52:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:44.091 12:52:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:44.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:44.091 12:52:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
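Stripped of the xtrace prefixes, the target-side setup recorded above, everything up to the point where bdevperf is launched, is a short JSON-RPC sequence against the target's default socket. The sketch below replays it; the rpc_get_methods polling loop is only an illustrative stand-in for the suite's waitforlisten helper, and the comments are an editorial gloss on the arguments as they appear in the trace.

    #!/usr/bin/env bash
    set -e
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # launch the target inside the namespace built earlier (pid 81290 in this run)
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

    # wait until the default RPC socket answers (stand-in for waitforlisten)
    until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

    "$RPC" nvmf_create_transport -t tcp -o -u 8192
    "$RPC" bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB malloc bdev, 512-byte blocks
    "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 # -a: allow any host NQN to connect
    "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420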
00:17:44.091 12:52:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:44.091 12:52:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:44.091 [2024-11-15 12:52:52.610145] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:17:44.091 [2024-11-15 12:52:52.610242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81333 ] 00:17:44.091 [2024-11-15 12:52:52.752728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.376 [2024-11-15 12:52:52.788312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.376 [2024-11-15 12:52:52.817887] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:44.950 12:52:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:44.950 12:52:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:17:44.950 12:52:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:45.209 12:52:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:17:45.775 NVMe0n1 00:17:45.775 12:52:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=81351 00:17:45.775 12:52:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:45.775 12:52:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:17:45.775 Running I/O for 10 seconds... 
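The initiator side follows the same pattern over bdevperf's private RPC socket; roughly, and again with a polling loop standing in for waitforlisten:

    #!/usr/bin/env bash
    set -e
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    PERF_SOCK=/var/tmp/bdevperf.sock

    # bdevperf in wait-for-RPC mode (-z): 128-deep 4 KiB verify workload for 10 s
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r "$PERF_SOCK" -q 128 -o 4096 -w verify -t 10 -f &

    until "$RPC" -s "$PERF_SOCK" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

    "$RPC" -s "$PERF_SOCK" bdev_nvme_set_options -r -1
    "$RPC" -s "$PERF_SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

    # start the registered job set; results come back over the same socket
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$PERF_SOCK" perform_tests &

The --ctrlr-loss-timeout-sec 5 / --reconnect-delay-sec 2 pair is the part this timeout test actually exercises: the very next RPC in the trace removes the 10.0.0.3:4420 listener while the verify workload is running, and the flood of "ABORTED - SQ DELETION" completions that follows is the in-flight verify I/O being failed back as the qpair is torn down.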
00:17:46.711 12:52:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:46.972 7204.00 IOPS, 28.14 MiB/s [2024-11-15T12:52:55.642Z] [2024-11-15 12:52:55.437829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:68184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.437878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.437901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:68192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.437912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.437925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:68200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.437935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.437947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:68208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.437956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.437968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:68216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.437977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.437988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:68224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:68232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:68240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:68248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:68256 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:68264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:68272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:68280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:68288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:68296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:68304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:68312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:68320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:68328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:68336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:46.973 [2024-11-15 12:52:55.438350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:68344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:68352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:68360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:68368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:68376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:68384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:68392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:68400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:68408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:68416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438553] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:68424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:68432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:68440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:68448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:68456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:68464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:68472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:68480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.973 [2024-11-15 12:52:55.438730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.973 [2024-11-15 12:52:55.438741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:68488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.438750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.438762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:68496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.438771] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.438782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:68504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.438791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.438803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:68512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.438812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.438823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:68520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.438832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.438843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:68528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.438855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.438867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.438876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.438888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:68544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.438897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.438908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:68552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.438918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.438929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:68560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.438938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.438950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:68568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.438958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.438969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:68576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.438979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.438990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:68584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.438999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.439010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:68592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.439020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.439031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:68600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.439040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.439051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:68608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.439060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.439072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:68616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.439081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.439092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:68624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.439101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.439112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:68632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.439121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.439132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:68640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.439141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.439167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:68648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.439177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.439187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:68656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.439212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 
12:52:55.439224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:68664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.439233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.439244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:68672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.439253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.439264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:68680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.439274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.439286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:68688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.439295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.439306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:68696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.439315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.439326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:68704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.439336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.439347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:68712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.439356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.439367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:68720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.439386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.439397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:68728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.439406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.439417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:68736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.439426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.439438] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:68744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.974 [2024-11-15 12:52:55.439447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.439458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:67752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.974 [2024-11-15 12:52:55.439467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.439479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:67760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.974 [2024-11-15 12:52:55.439488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.439500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:67768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.974 [2024-11-15 12:52:55.439508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.439520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:67776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.974 [2024-11-15 12:52:55.439528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.439539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:67784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.974 [2024-11-15 12:52:55.439549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.439560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:67792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.974 [2024-11-15 12:52:55.439569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.439580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:67800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.974 [2024-11-15 12:52:55.439589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.974 [2024-11-15 12:52:55.439600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:67808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.439615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.439627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:67816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.439647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.439659] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:31 nsid:1 lba:67824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.439669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.439680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:67832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.439689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.439700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:67840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.439710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.439721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:67848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.439730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.439742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:67856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.439750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.439762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:67864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.439770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.439782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.975 [2024-11-15 12:52:55.439791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.439803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:68760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.975 [2024-11-15 12:52:55.439812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.439823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:67872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.439832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.439843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:67880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.439852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.439863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:67888 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.439872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.439883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.439893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.439903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:67904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.439912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.439924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.439933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.439944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:67920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.439956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.439967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:68768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:46.975 [2024-11-15 12:52:55.439977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.439988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:67928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.439997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.440008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:67936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.440017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.440029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:67944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.440038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.440049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:67952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.440058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.440069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:67960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:46.975 [2024-11-15 12:52:55.440078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.440089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:67968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.440098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.440109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:67976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.440118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.440130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:67984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.440139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.440150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:67992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.440159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.440170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:68000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.440179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.440191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:68008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.440200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.440211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:68016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.440220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.440232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:68024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.440241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.440252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:68032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.440261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.440272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:68040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 
12:52:55.440283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.440294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:68048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.440304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.440315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:68056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.440324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.440335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:68064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.440345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.440356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:68072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.440366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.440377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:68080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.975 [2024-11-15 12:52:55.440385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.975 [2024-11-15 12:52:55.440397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:68088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.976 [2024-11-15 12:52:55.440406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.976 [2024-11-15 12:52:55.440417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:68096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.976 [2024-11-15 12:52:55.440441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.976 [2024-11-15 12:52:55.440453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:68104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.976 [2024-11-15 12:52:55.440461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.976 [2024-11-15 12:52:55.440472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:68112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.976 [2024-11-15 12:52:55.440481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.976 [2024-11-15 12:52:55.440508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:68120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.976 [2024-11-15 12:52:55.440518] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.976 [2024-11-15 12:52:55.440529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:68128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.976 [2024-11-15 12:52:55.440538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.976 [2024-11-15 12:52:55.440549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:68136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.976 [2024-11-15 12:52:55.440558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.976 [2024-11-15 12:52:55.440569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:68144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.976 [2024-11-15 12:52:55.440578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.976 [2024-11-15 12:52:55.440589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:68152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.976 [2024-11-15 12:52:55.440598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.976 [2024-11-15 12:52:55.440609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:68160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.976 [2024-11-15 12:52:55.440618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.976 [2024-11-15 12:52:55.440637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:68168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.976 [2024-11-15 12:52:55.440651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.976 [2024-11-15 12:52:55.440662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13be280 is same with the state(6) to be set 00:17:46.976 [2024-11-15 12:52:55.440674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:46.976 [2024-11-15 12:52:55.440681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:46.976 [2024-11-15 12:52:55.440690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68176 len:8 PRP1 0x0 PRP2 0x0 00:17:46.976 [2024-11-15 12:52:55.440699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.976 [2024-11-15 12:52:55.440815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.976 [2024-11-15 12:52:55.440834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.976 [2024-11-15 12:52:55.440846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.976 [2024-11-15 12:52:55.440855] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.976 [2024-11-15 12:52:55.440864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.976 [2024-11-15 12:52:55.440874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.976 [2024-11-15 12:52:55.440884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.976 [2024-11-15 12:52:55.440893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.976 [2024-11-15 12:52:55.440902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1350e50 is same with the state(6) to be set 00:17:46.976 [2024-11-15 12:52:55.441150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:46.976 [2024-11-15 12:52:55.441187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1350e50 (9): Bad file descriptor 00:17:46.976 [2024-11-15 12:52:55.441282] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:46.976 [2024-11-15 12:52:55.441310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1350e50 with addr=10.0.0.3, port=4420 00:17:46.976 [2024-11-15 12:52:55.441322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1350e50 is same with the state(6) to be set 00:17:46.976 [2024-11-15 12:52:55.441340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1350e50 (9): Bad file descriptor 00:17:46.976 [2024-11-15 12:52:55.441357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:17:46.976 [2024-11-15 12:52:55.441366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:17:46.976 [2024-11-15 12:52:55.441377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:17:46.976 [2024-11-15 12:52:55.441387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:17:46.976 [2024-11-15 12:52:55.441397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:46.976 12:52:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:17:48.848 4234.50 IOPS, 16.54 MiB/s [2024-11-15T12:52:57.518Z] 2823.00 IOPS, 11.03 MiB/s [2024-11-15T12:52:57.518Z] [2024-11-15 12:52:57.441604] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:48.848 [2024-11-15 12:52:57.441714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1350e50 with addr=10.0.0.3, port=4420 00:17:48.848 [2024-11-15 12:52:57.441732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1350e50 is same with the state(6) to be set 00:17:48.848 [2024-11-15 12:52:57.441755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1350e50 (9): Bad file descriptor 00:17:48.848 [2024-11-15 12:52:57.441786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:17:48.848 [2024-11-15 12:52:57.441797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:17:48.848 [2024-11-15 12:52:57.441823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:17:48.848 [2024-11-15 12:52:57.441833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:17:48.848 [2024-11-15 12:52:57.441843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:48.848 12:52:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:17:48.848 12:52:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:48.848 12:52:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:17:49.107 12:52:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:17:49.107 12:52:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:17:49.107 12:52:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:17:49.107 12:52:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:17:49.366 12:52:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:17:49.366 12:52:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:17:51.005 2117.25 IOPS, 8.27 MiB/s [2024-11-15T12:52:59.675Z] 1693.80 IOPS, 6.62 MiB/s [2024-11-15T12:52:59.675Z] [2024-11-15 12:52:59.442002] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:51.005 [2024-11-15 12:52:59.442099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1350e50 with addr=10.0.0.3, port=4420 00:17:51.005 [2024-11-15 12:52:59.442115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1350e50 is same with the state(6) to be set 00:17:51.005 [2024-11-15 12:52:59.442140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1350e50 (9): Bad file descriptor 00:17:51.005 [2024-11-15 12:52:59.442158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:17:51.005 [2024-11-15 12:52:59.442167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:17:51.005 [2024-11-15 12:52:59.442176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:17:51.005 [2024-11-15 12:52:59.442187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:17:51.005 [2024-11-15 12:52:59.442198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:52.879 1411.50 IOPS, 5.51 MiB/s [2024-11-15T12:53:01.549Z] 1209.86 IOPS, 4.73 MiB/s [2024-11-15T12:53:01.549Z] [2024-11-15 12:53:01.442307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:17:52.879 [2024-11-15 12:53:01.442372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:17:52.879 [2024-11-15 12:53:01.442399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:17:52.879 [2024-11-15 12:53:01.442408] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:17:52.879 [2024-11-15 12:53:01.442420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:17:53.816 1058.62 IOPS, 4.14 MiB/s 00:17:53.816 Latency(us) 00:17:53.816 [2024-11-15T12:53:02.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.816 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:53.816 Verification LBA range: start 0x0 length 0x4000 00:17:53.816 NVMe0n1 : 8.18 1034.93 4.04 15.64 0.00 121662.00 3768.32 7015926.69 00:17:53.816 [2024-11-15T12:53:02.486Z] =================================================================================================================== 00:17:53.816 [2024-11-15T12:53:02.486Z] Total : 1034.93 4.04 15.64 0.00 121662.00 3768.32 7015926.69 00:17:53.816 { 00:17:53.816 "results": [ 00:17:53.816 { 00:17:53.816 "job": "NVMe0n1", 00:17:53.816 "core_mask": "0x4", 00:17:53.816 "workload": "verify", 00:17:53.816 "status": "finished", 00:17:53.816 "verify_range": { 00:17:53.816 "start": 0, 00:17:53.816 "length": 16384 00:17:53.816 }, 00:17:53.816 "queue_depth": 128, 00:17:53.816 "io_size": 4096, 00:17:53.816 "runtime": 8.183132, 00:17:53.816 "iops": 1034.9338126282212, 00:17:53.816 "mibps": 4.042710205578989, 00:17:53.816 "io_failed": 128, 00:17:53.816 "io_timeout": 0, 00:17:53.816 "avg_latency_us": 121661.99624985461, 00:17:53.816 "min_latency_us": 3768.32, 00:17:53.816 "max_latency_us": 7015926.69090909 00:17:53.816 } 00:17:53.816 ], 00:17:53.816 "core_count": 1 00:17:53.816 } 00:17:54.384 12:53:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:17:54.384 12:53:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:54.384 12:53:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:17:54.644 12:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:17:54.644 12:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:17:54.644 12:53:03 
nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:17:54.644 12:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:17:54.903 12:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:17:54.903 12:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 81351 00:17:54.903 12:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81333 00:17:54.903 12:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81333 ']' 00:17:54.903 12:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81333 00:17:54.903 12:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:17:54.903 12:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:54.903 12:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81333 00:17:54.903 12:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:54.903 12:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:54.903 killing process with pid 81333 00:17:54.903 12:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81333' 00:17:54.903 12:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81333 00:17:54.903 Received shutdown signal, test time was about 9.285067 seconds 00:17:54.903 00:17:54.903 Latency(us) 00:17:54.903 [2024-11-15T12:53:03.573Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.903 [2024-11-15T12:53:03.573Z] =================================================================================================================== 00:17:54.903 [2024-11-15T12:53:03.573Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:54.903 12:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81333 00:17:55.162 12:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:55.421 [2024-11-15 12:53:03.907090] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:55.421 12:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=81479 00:17:55.421 12:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 81479 /var/tmp/bdevperf.sock 00:17:55.421 12:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:17:55.421 12:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81479 ']' 00:17:55.421 12:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:55.421 12:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:55.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:55.421 12:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:55.421 12:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:55.421 12:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:55.421 [2024-11-15 12:53:03.980424] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:17:55.421 [2024-11-15 12:53:03.980538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81479 ] 00:17:55.680 [2024-11-15 12:53:04.122761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.680 [2024-11-15 12:53:04.152286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:55.680 [2024-11-15 12:53:04.180141] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:56.254 12:53:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.254 12:53:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:17:56.254 12:53:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:56.517 12:53:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:17:56.775 NVMe0n1 00:17:56.775 12:53:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=81497 00:17:56.775 12:53:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:17:56.775 12:53:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:57.034 Running I/O for 10 seconds... 
00:17:57.971 12:53:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:58.233 9832.00 IOPS, 38.41 MiB/s [2024-11-15T12:53:06.903Z] [2024-11-15 12:53:06.657062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa460d0 is same with the state(6) to be set 00:17:58.233 [2024-11-15 12:53:06.657112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa460d0 is same with the state(6) to be set 00:17:58.233 [2024-11-15 12:53:06.657123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa460d0 is same with the state(6) to be set 00:17:58.233 [2024-11-15 12:53:06.657598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:89376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.233 [2024-11-15 12:53:06.657697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.233 [2024-11-15 12:53:06.657731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:89384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.233 [2024-11-15 12:53:06.657750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.233 [2024-11-15 12:53:06.657769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:89392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.233 [2024-11-15 12:53:06.657801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.233 [2024-11-15 12:53:06.657822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:89400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.233 [2024-11-15 12:53:06.657838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.233 [2024-11-15 12:53:06.657857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.233 [2024-11-15 12:53:06.657873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.233 [2024-11-15 12:53:06.657891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.233 [2024-11-15 12:53:06.657906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.233 [2024-11-15 12:53:06.657924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.233 [2024-11-15 12:53:06.657940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.233 [2024-11-15 12:53:06.657958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.233 [2024-11-15 12:53:06.657975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:17:58.233 [2024-11-15 12:53:06.658007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.233 [2024-11-15 12:53:06.658021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.233 [2024-11-15 12:53:06.658037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.233 [2024-11-15 12:53:06.658052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.233 [2024-11-15 12:53:06.658070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.233 [2024-11-15 12:53:06.658085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.233 [2024-11-15 12:53:06.658103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.233 [2024-11-15 12:53:06.658117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.233 [2024-11-15 12:53:06.658134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.233 [2024-11-15 12:53:06.658148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.233 [2024-11-15 12:53:06.658165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:89800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.233 [2024-11-15 12:53:06.658181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.233 [2024-11-15 12:53:06.658200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.233 [2024-11-15 12:53:06.658215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.233 [2024-11-15 12:53:06.658233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:89816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.233 [2024-11-15 12:53:06.658249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.233 [2024-11-15 12:53:06.658266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:89824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.233 [2024-11-15 12:53:06.658279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.233 [2024-11-15 12:53:06.658291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.233 [2024-11-15 12:53:06.658310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.233 [2024-11-15 12:53:06.658328] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.233 [2024-11-15 12:53:06.658339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.233 [2024-11-15 12:53:06.658351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.233 [2024-11-15 12:53:06.658362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.233 [2024-11-15 12:53:06.658374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:89408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.233 [2024-11-15 12:53:06.658388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.233 [2024-11-15 12:53:06.658406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:89416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.233 [2024-11-15 12:53:06.658417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.233 [2024-11-15 12:53:06.658430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:89424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.233 [2024-11-15 12:53:06.658440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.233 [2024-11-15 12:53:06.658455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:89432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.233 [2024-11-15 12:53:06.658470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.233 [2024-11-15 12:53:06.658488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:89440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.233 [2024-11-15 12:53:06.658502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.233 [2024-11-15 12:53:06.658518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:89448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.233 [2024-11-15 12:53:06.658533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.233 [2024-11-15 12:53:06.658549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:89456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.233 [2024-11-15 12:53:06.658566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.658583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:89464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.234 [2024-11-15 12:53:06.658598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.658616] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:89472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.234 [2024-11-15 12:53:06.658630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.658663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:89480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.234 [2024-11-15 12:53:06.658683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.658700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:89488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.234 [2024-11-15 12:53:06.658715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.658733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:89496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.234 [2024-11-15 12:53:06.658748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.658766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:89504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.234 [2024-11-15 12:53:06.658783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.658800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:89512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.234 [2024-11-15 12:53:06.658816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.658833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:89520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.234 [2024-11-15 12:53:06.658847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.658866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:89528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.234 [2024-11-15 12:53:06.658881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.658898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.234 [2024-11-15 12:53:06.658913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.658932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.234 [2024-11-15 12:53:06.658958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.658977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:123 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.234 [2024-11-15 12:53:06.658989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.659001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:89880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.234 [2024-11-15 12:53:06.659012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.659024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.234 [2024-11-15 12:53:06.659035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.659048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.234 [2024-11-15 12:53:06.659062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.659075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.234 [2024-11-15 12:53:06.659086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.659101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.234 [2024-11-15 12:53:06.659115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.659128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.234 [2024-11-15 12:53:06.659138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.659150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:89544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.234 [2024-11-15 12:53:06.659164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.659179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.234 [2024-11-15 12:53:06.659190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.659203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:89560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.234 [2024-11-15 12:53:06.659218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.659236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:89568 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.234 [2024-11-15 12:53:06.659253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.659271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:89576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.234 [2024-11-15 12:53:06.659287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.659305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:89584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.234 [2024-11-15 12:53:06.659328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.659347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:89592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.234 [2024-11-15 12:53:06.659362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.659379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.234 [2024-11-15 12:53:06.659394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.659412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.234 [2024-11-15 12:53:06.659428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.659445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.234 [2024-11-15 12:53:06.659460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.659479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.234 [2024-11-15 12:53:06.659495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.659513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.234 [2024-11-15 12:53:06.659527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.659544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.234 [2024-11-15 12:53:06.659558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.659574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.234 
[2024-11-15 12:53:06.659589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.659624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:89976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.234 [2024-11-15 12:53:06.659642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.659662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:89984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.234 [2024-11-15 12:53:06.659678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.659697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:89992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.234 [2024-11-15 12:53:06.659711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.659727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:90000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.234 [2024-11-15 12:53:06.659742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.659760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.234 [2024-11-15 12:53:06.659776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.659793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:90016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.234 [2024-11-15 12:53:06.659809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.659828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.234 [2024-11-15 12:53:06.659843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.234 [2024-11-15 12:53:06.659860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:90032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.234 [2024-11-15 12:53:06.659874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.659891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.235 [2024-11-15 12:53:06.659907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.659925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:90048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.235 [2024-11-15 12:53:06.659941] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.659958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:90056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.235 [2024-11-15 12:53:06.659973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.659988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.235 [2024-11-15 12:53:06.660001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.660018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:90072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.235 [2024-11-15 12:53:06.660033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.660049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.235 [2024-11-15 12:53:06.660064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.660082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:90088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.235 [2024-11-15 12:53:06.660097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.660114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.235 [2024-11-15 12:53:06.660128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.660145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:90104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.235 [2024-11-15 12:53:06.660161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.660180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:89600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.235 [2024-11-15 12:53:06.660194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.660210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:89608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.235 [2024-11-15 12:53:06.660224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.660242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:89616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.235 [2024-11-15 12:53:06.660257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.660273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:89624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.235 [2024-11-15 12:53:06.660288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.660304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:89632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.235 [2024-11-15 12:53:06.660319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.660337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:89640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.235 [2024-11-15 12:53:06.660353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.660371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:89648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.235 [2024-11-15 12:53:06.660386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.660403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:89656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.235 [2024-11-15 12:53:06.660418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.660435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:90112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.235 [2024-11-15 12:53:06.660451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.660468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:90120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.235 [2024-11-15 12:53:06.660483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.660500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:90128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.235 [2024-11-15 12:53:06.660516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.660535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:90136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.235 [2024-11-15 12:53:06.660550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.660567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:90144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.235 [2024-11-15 12:53:06.660581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.660614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:90152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.235 [2024-11-15 12:53:06.660634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.660653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.235 [2024-11-15 12:53:06.660668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.660685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:90168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.235 [2024-11-15 12:53:06.660701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.660720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:90176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.235 [2024-11-15 12:53:06.660735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.660752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:90184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.235 [2024-11-15 12:53:06.660766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.660783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:90192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.235 [2024-11-15 12:53:06.660798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.660815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:90200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.235 [2024-11-15 12:53:06.660832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.660849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:90208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.235 [2024-11-15 12:53:06.660865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.660883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:90216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.235 [2024-11-15 12:53:06.660903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.660920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.235 [2024-11-15 12:53:06.660935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 
12:53:06.660952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.235 [2024-11-15 12:53:06.660968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.660985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:89664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.235 [2024-11-15 12:53:06.661000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.661018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:89672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.235 [2024-11-15 12:53:06.661034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.661052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:89680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.235 [2024-11-15 12:53:06.661068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.661084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:89688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.235 [2024-11-15 12:53:06.661098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.661116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:89696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.235 [2024-11-15 12:53:06.661132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.661150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:89704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.235 [2024-11-15 12:53:06.661164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.235 [2024-11-15 12:53:06.661181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:89712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.235 [2024-11-15 12:53:06.661195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.236 [2024-11-15 12:53:06.661211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163d280 is same with the state(6) to be set 00:17:58.236 [2024-11-15 12:53:06.661231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:58.236 [2024-11-15 12:53:06.661244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:58.236 [2024-11-15 12:53:06.661258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89720 len:8 PRP1 0x0 PRP2 0x0 00:17:58.236 [2024-11-15 12:53:06.661273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.236 
[2024-11-15 12:53:06.661289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:58.236 [2024-11-15 12:53:06.661301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:58.236 [2024-11-15 12:53:06.661315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90240 len:8 PRP1 0x0 PRP2 0x0 00:17:58.236 [2024-11-15 12:53:06.661328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.236 [2024-11-15 12:53:06.661345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:58.236 [2024-11-15 12:53:06.661358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:58.236 [2024-11-15 12:53:06.661371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90248 len:8 PRP1 0x0 PRP2 0x0 00:17:58.236 [2024-11-15 12:53:06.661384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.236 [2024-11-15 12:53:06.661400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:58.236 [2024-11-15 12:53:06.661410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:58.236 [2024-11-15 12:53:06.661421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90256 len:8 PRP1 0x0 PRP2 0x0 00:17:58.236 [2024-11-15 12:53:06.661434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.236 [2024-11-15 12:53:06.661449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:58.236 [2024-11-15 12:53:06.661460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:58.236 [2024-11-15 12:53:06.661472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90264 len:8 PRP1 0x0 PRP2 0x0 00:17:58.236 [2024-11-15 12:53:06.661486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.236 [2024-11-15 12:53:06.661501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:58.236 [2024-11-15 12:53:06.661513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:58.236 [2024-11-15 12:53:06.661525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90272 len:8 PRP1 0x0 PRP2 0x0 00:17:58.236 [2024-11-15 12:53:06.661540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.236 [2024-11-15 12:53:06.661554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:58.236 [2024-11-15 12:53:06.661566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:58.236 [2024-11-15 12:53:06.661580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90280 len:8 PRP1 0x0 PRP2 0x0 00:17:58.236 [2024-11-15 12:53:06.661593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.236 [2024-11-15 12:53:06.661626] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:58.236 [2024-11-15 12:53:06.661640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:58.236 [2024-11-15 12:53:06.661652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90288 len:8 PRP1 0x0 PRP2 0x0 00:17:58.236 [2024-11-15 12:53:06.661678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.236 [2024-11-15 12:53:06.661695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:58.236 [2024-11-15 12:53:06.661707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:58.236 [2024-11-15 12:53:06.661719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90296 len:8 PRP1 0x0 PRP2 0x0 00:17:58.236 [2024-11-15 12:53:06.661732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.236 [2024-11-15 12:53:06.661748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:58.236 [2024-11-15 12:53:06.661760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:58.236 [2024-11-15 12:53:06.661773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90304 len:8 PRP1 0x0 PRP2 0x0 00:17:58.236 [2024-11-15 12:53:06.661787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.236 [2024-11-15 12:53:06.661802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:58.236 [2024-11-15 12:53:06.661815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:58.236 [2024-11-15 12:53:06.661829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90312 len:8 PRP1 0x0 PRP2 0x0 00:17:58.236 [2024-11-15 12:53:06.661843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.236 [2024-11-15 12:53:06.661861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:58.236 [2024-11-15 12:53:06.661874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:58.236 [2024-11-15 12:53:06.661885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90320 len:8 PRP1 0x0 PRP2 0x0 00:17:58.236 [2024-11-15 12:53:06.661894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.236 [2024-11-15 12:53:06.661907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:58.236 [2024-11-15 12:53:06.661918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:58.236 [2024-11-15 12:53:06.661930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90328 len:8 PRP1 0x0 PRP2 0x0 00:17:58.236 [2024-11-15 12:53:06.661944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.236 [2024-11-15 12:53:06.661974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:17:58.236 [2024-11-15 12:53:06.661987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:58.236 [2024-11-15 12:53:06.661999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90336 len:8 PRP1 0x0 PRP2 0x0 00:17:58.236 [2024-11-15 12:53:06.662014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.236 [2024-11-15 12:53:06.662029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:58.236 [2024-11-15 12:53:06.662043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:58.236 [2024-11-15 12:53:06.662055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90344 len:8 PRP1 0x0 PRP2 0x0 00:17:58.236 [2024-11-15 12:53:06.662068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.236 [2024-11-15 12:53:06.662083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:58.236 [2024-11-15 12:53:06.662096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:58.236 [2024-11-15 12:53:06.662109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90352 len:8 PRP1 0x0 PRP2 0x0 00:17:58.236 [2024-11-15 12:53:06.662123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.236 [2024-11-15 12:53:06.662138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:58.236 [2024-11-15 12:53:06.662150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:58.236 [2024-11-15 12:53:06.662158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90360 len:8 PRP1 0x0 PRP2 0x0 00:17:58.236 [2024-11-15 12:53:06.662166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.236 [2024-11-15 12:53:06.662177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:58.236 [2024-11-15 12:53:06.662188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:58.236 [2024-11-15 12:53:06.662200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90368 len:8 PRP1 0x0 PRP2 0x0 00:17:58.236 [2024-11-15 12:53:06.662213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.236 [2024-11-15 12:53:06.662228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:58.236 [2024-11-15 12:53:06.662239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:58.236 [2024-11-15 12:53:06.662252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90376 len:8 PRP1 0x0 PRP2 0x0 00:17:58.236 [2024-11-15 12:53:06.662265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.236 [2024-11-15 12:53:06.662298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:58.236 [2024-11-15 12:53:06.662310] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:58.236 [2024-11-15 12:53:06.662322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90384 len:8 PRP1 0x0 PRP2 0x0 00:17:58.236 [2024-11-15 12:53:06.662336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.236 [2024-11-15 12:53:06.662351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:58.236 [2024-11-15 12:53:06.662362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:58.236 [2024-11-15 12:53:06.662374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90392 len:8 PRP1 0x0 PRP2 0x0 00:17:58.236 [2024-11-15 12:53:06.662387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.236 [2024-11-15 12:53:06.662704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:17:58.236 [2024-11-15 12:53:06.662818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15cfe50 (9): Bad file descriptor 00:17:58.236 [2024-11-15 12:53:06.662975] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:58.236 [2024-11-15 12:53:06.663016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15cfe50 with addr=10.0.0.3, port=4420 00:17:58.236 [2024-11-15 12:53:06.663036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15cfe50 is same with the state(6) to be set 00:17:58.236 [2024-11-15 12:53:06.663065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15cfe50 (9): Bad file descriptor 00:17:58.236 [2024-11-15 12:53:06.663093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:17:58.237 [2024-11-15 12:53:06.663109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:17:58.237 [2024-11-15 12:53:06.663125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:17:58.237 [2024-11-15 12:53:06.663141] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
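(Annotation: the reset cycle above repeats because uring_sock_create's connect() keeps returning errno 111, i.e. ECONNREFUSED, while the TCP listener on 10.0.0.3:4420 is absent; bdev_nvme then marks the controller failed and the test retries about once per second. A minimal sketch of that retry pattern, assuming a plain TCP probe and a fixed one-second back-off; the function name and parameters are illustrative, not SPDK internals.)

    # Hypothetical sketch of the reconnect pattern visible in this log:
    # connect() fails with errno 111 (ECONNREFUSED) until the listener
    # is re-added, so the initiator retries roughly once per second.
    import errno
    import socket
    import time

    def wait_for_listener(addr: str, port: int, attempts: int = 30) -> bool:
        """Return True once a TCP connect to addr:port succeeds."""
        for _ in range(attempts):
            try:
                with socket.create_connection((addr, port), timeout=1):
                    return True          # listener is back; reconnect can proceed
            except OSError as exc:
                if exc.errno != errno.ECONNREFUSED:   # 111 on Linux
                    raise                # a different failure; surface it
                time.sleep(1)            # mirrors the test's one-second pacing
        return False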
00:17:58.237 [2024-11-15 12:53:06.663158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:17:58.237 12:53:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:17:59.173 5586.00 IOPS, 21.82 MiB/s [2024-11-15T12:53:07.843Z] [2024-11-15 12:53:07.663283] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:59.173 [2024-11-15 12:53:07.663343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15cfe50 with addr=10.0.0.3, port=4420 00:17:59.173 [2024-11-15 12:53:07.663366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15cfe50 is same with the state(6) to be set 00:17:59.173 [2024-11-15 12:53:07.663397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15cfe50 (9): Bad file descriptor 00:17:59.173 [2024-11-15 12:53:07.663442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:17:59.173 [2024-11-15 12:53:07.663491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:17:59.173 [2024-11-15 12:53:07.663507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:17:59.173 [2024-11-15 12:53:07.663524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:17:59.173 [2024-11-15 12:53:07.663541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:17:59.173 12:53:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:59.432 [2024-11-15 12:53:07.931777] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:59.432 12:53:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 81497 00:18:00.258 3724.00 IOPS, 14.55 MiB/s [2024-11-15T12:53:08.928Z] [2024-11-15 12:53:08.674631] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
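(Annotation: the step above re-adds the TCP listener with scripts/rpc.py nvmf_subsystem_add_listener, after which the next reconnect attempt succeeds and the controller reset completes. A rough sketch of issuing that same RPC directly over SPDK's JSON-RPC Unix socket; the /var/tmp/spdk.sock path and the exact parameter layout are assumptions based on SPDK defaults rather than anything shown in this log, and the single recv() is a simplification.)

    import json
    import socket

    def add_tcp_listener(nqn: str, traddr: str, trsvcid: str,
                         sock_path: str = "/var/tmp/spdk.sock") -> dict:
        # Build a JSON-RPC 2.0 request equivalent to the rpc.py call above.
        request = {
            "jsonrpc": "2.0",
            "id": 1,
            "method": "nvmf_subsystem_add_listener",
            "params": {
                "nqn": nqn,
                "listen_address": {
                    "trtype": "TCP",
                    "adrfam": "IPv4",
                    "traddr": traddr,
                    "trsvcid": trsvcid,
                },
            },
        }
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(request).encode())
            return json.loads(s.recv(65536).decode())

    # e.g. add_tcp_listener("nqn.2016-06.io.spdk:cnode1", "10.0.0.3", "4420")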
00:18:02.131 2793.00 IOPS, 10.91 MiB/s [2024-11-15T12:53:11.738Z] 4091.40 IOPS, 15.98 MiB/s [2024-11-15T12:53:12.676Z] 5213.83 IOPS, 20.37 MiB/s [2024-11-15T12:53:13.614Z] 6007.00 IOPS, 23.46 MiB/s [2024-11-15T12:53:14.551Z] 6601.12 IOPS, 25.79 MiB/s [2024-11-15T12:53:15.929Z] 7061.44 IOPS, 27.58 MiB/s [2024-11-15T12:53:15.929Z] 7441.20 IOPS, 29.07 MiB/s 00:18:07.259 Latency(us) 00:18:07.259 [2024-11-15T12:53:15.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.259 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:07.259 Verification LBA range: start 0x0 length 0x4000 00:18:07.259 NVMe0n1 : 10.01 7445.29 29.08 0.00 0.00 17156.92 2517.18 3019898.88 00:18:07.259 [2024-11-15T12:53:15.929Z] =================================================================================================================== 00:18:07.259 [2024-11-15T12:53:15.929Z] Total : 7445.29 29.08 0.00 0.00 17156.92 2517.18 3019898.88 00:18:07.259 { 00:18:07.259 "results": [ 00:18:07.259 { 00:18:07.259 "job": "NVMe0n1", 00:18:07.259 "core_mask": "0x4", 00:18:07.259 "workload": "verify", 00:18:07.259 "status": "finished", 00:18:07.259 "verify_range": { 00:18:07.259 "start": 0, 00:18:07.259 "length": 16384 00:18:07.259 }, 00:18:07.259 "queue_depth": 128, 00:18:07.259 "io_size": 4096, 00:18:07.259 "runtime": 10.009019, 00:18:07.259 "iops": 7445.285097370682, 00:18:07.259 "mibps": 29.083144911604226, 00:18:07.259 "io_failed": 0, 00:18:07.259 "io_timeout": 0, 00:18:07.259 "avg_latency_us": 17156.922875128093, 00:18:07.259 "min_latency_us": 2517.1781818181817, 00:18:07.259 "max_latency_us": 3019898.88 00:18:07.259 } 00:18:07.259 ], 00:18:07.259 "core_count": 1 00:18:07.259 } 00:18:07.259 12:53:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=81606 00:18:07.259 12:53:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:07.259 12:53:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:18:07.259 Running I/O for 10 seconds... 
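(Annotation: the bdevperf summary above reports 7445.29 IOPS at a 4096-byte IO size and 29.08 MiB/s; those figures are mutually consistent, and the average latency also lines up with the queue depth via Little's law. A quick arithmetic check using only the numbers from the results JSON.)

    # Cross-check of the bdevperf summary: IOPS x IO size should give MiB/s.
    iops = 7445.29                 # "iops" from the results JSON
    io_size = 4096                 # "io_size" in bytes
    avg_latency_us = 17156.92      # "avg_latency_us"
    queue_depth = 128              # "queue_depth"

    mibps = iops * io_size / (1024 * 1024)
    print(f"{mibps:.2f} MiB/s")    # ~29.08, matching the reported throughput

    # Little's law: sustained IOPS ~ queue_depth / average latency.
    print(f"{queue_depth / (avg_latency_us * 1e-6):.0f} IOPS")  # ~7461, close to 7445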
00:18:08.200 12:53:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:08.200 7957.00 IOPS, 31.08 MiB/s [2024-11-15T12:53:16.870Z] [2024-11-15 12:53:16.815966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.200 [2024-11-15 12:53:16.816017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.200 [2024-11-15 12:53:16.816040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.200 [2024-11-15 12:53:16.816051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.200 [2024-11-15 12:53:16.816063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.200 [2024-11-15 12:53:16.816072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.200 [2024-11-15 12:53:16.816088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.200 [2024-11-15 12:53:16.816097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.200 [2024-11-15 12:53:16.816108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.200 [2024-11-15 12:53:16.816117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.200 [2024-11-15 12:53:16.816127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.200 [2024-11-15 12:53:16.816136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.200 [2024-11-15 12:53:16.816146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.200 [2024-11-15 12:53:16.816155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.200 [2024-11-15 12:53:16.816165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.200 [2024-11-15 12:53:16.816173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.200 [2024-11-15 12:53:16.816184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.200 [2024-11-15 12:53:16.816192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.200 [2024-11-15 12:53:16.816203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71864 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.200 [2024-11-15 12:53:16.816212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.200 [2024-11-15 12:53:16.816238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.200 [2024-11-15 12:53:16.816247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.200 [2024-11-15 12:53:16.816257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.200 [2024-11-15 12:53:16.816266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.200 [2024-11-15 12:53:16.816277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.200 [2024-11-15 12:53:16.816287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.816307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.816326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.816345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.816365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.816387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.816407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:08.201 [2024-11-15 12:53:16.816426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.816445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.816465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.201 [2024-11-15 12:53:16.816484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.201 [2024-11-15 12:53:16.816503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.816522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.816542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.816561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:71992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.816581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:72000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.816600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:72008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 
12:53:16.816634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:72016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.816654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.201 [2024-11-15 12:53:16.816674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:72024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.816693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.816713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.816732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.816752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.816772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.816791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:72072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.816810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.816830] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:72088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.816849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.816868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.816887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.816907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.816927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.816947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:72136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.816967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.816986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.816997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:72152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.817006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.817018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.817026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.201 [2024-11-15 12:53:16.817037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.201 [2024-11-15 12:53:16.817046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:72248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:72272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:72296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:72304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:72312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:08.202 [2024-11-15 12:53:16.817427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:72336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:72376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:72392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:72400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817632] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:72408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:72440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.202 [2024-11-15 12:53:16.817746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.202 [2024-11-15 12:53:16.817756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:72448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.817766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.817776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.817785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.817796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:72464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.817805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.817816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.817825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.817835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.817844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.817855] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.817864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.817875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:72496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.817883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.817894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.817903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.817914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:72512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.817923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.817933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:72520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.817942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.817952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:72528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.817961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.817972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.817981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.817998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.818008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.818018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:72552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.818027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.818038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.818047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.818057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:123 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.818066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.818077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.818086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.818096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.818105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.818116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:72592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.818124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.818135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.818144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.818154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.818163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.818174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:72616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.818182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.818193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:72624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.818202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.818212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.818221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.818236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.818245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.818256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:72648 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.818264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.818275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:72656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.818300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.818311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:72664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.818320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.818333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.818342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.818353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.818363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.818374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.818382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.818393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.818402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.818413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.818422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.818433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.818442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.818453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.818462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.818473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:08.203 [2024-11-15 12:53:16.818482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.203 [2024-11-15 12:53:16.818493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.203 [2024-11-15 12:53:16.818502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.204 [2024-11-15 12:53:16.818514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.204 [2024-11-15 12:53:16.818523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.204 [2024-11-15 12:53:16.818534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:72752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.204 [2024-11-15 12:53:16.818542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.204 [2024-11-15 12:53:16.818553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.204 [2024-11-15 12:53:16.818563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.204 [2024-11-15 12:53:16.818576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:72768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.204 [2024-11-15 12:53:16.818585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.204 [2024-11-15 12:53:16.818596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.204 [2024-11-15 12:53:16.818605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.204 [2024-11-15 12:53:16.818625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163e350 is same with the state(6) to be set 00:18:08.204 [2024-11-15 12:53:16.818639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.204 [2024-11-15 12:53:16.818648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.204 [2024-11-15 12:53:16.818656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72784 len:8 PRP1 0x0 PRP2 0x0 00:18:08.204 [2024-11-15 12:53:16.818667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.204 [2024-11-15 12:53:16.819017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:08.204 [2024-11-15 12:53:16.819116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15cfe50 (9): Bad file descriptor 00:18:08.204 [2024-11-15 12:53:16.819244] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:08.204 [2024-11-15 12:53:16.819270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock 
connection error of tqpair=0x15cfe50 with addr=10.0.0.3, port=4420 00:18:08.204 [2024-11-15 12:53:16.819282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15cfe50 is same with the state(6) to be set 00:18:08.204 [2024-11-15 12:53:16.819303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15cfe50 (9): Bad file descriptor 00:18:08.204 [2024-11-15 12:53:16.819320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:18:08.204 [2024-11-15 12:53:16.819329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:18:08.204 [2024-11-15 12:53:16.819339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:18:08.204 [2024-11-15 12:53:16.819351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:18:08.204 [2024-11-15 12:53:16.819362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:08.204 12:53:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:18:09.400 4490.50 IOPS, 17.54 MiB/s [2024-11-15T12:53:18.070Z] [2024-11-15 12:53:17.819476] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:09.401 [2024-11-15 12:53:17.819551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15cfe50 with addr=10.0.0.3, port=4420 00:18:09.401 [2024-11-15 12:53:17.819566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15cfe50 is same with the state(6) to be set 00:18:09.401 [2024-11-15 12:53:17.819591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15cfe50 (9): Bad file descriptor 00:18:09.401 [2024-11-15 12:53:17.819620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:18:09.401 [2024-11-15 12:53:17.819631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:18:09.401 [2024-11-15 12:53:17.819642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:18:09.401 [2024-11-15 12:53:17.819652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:18:09.401 [2024-11-15 12:53:17.819661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:10.338 2993.67 IOPS, 11.69 MiB/s [2024-11-15T12:53:19.008Z] [2024-11-15 12:53:18.819763] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:10.338 [2024-11-15 12:53:18.819829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15cfe50 with addr=10.0.0.3, port=4420 00:18:10.338 [2024-11-15 12:53:18.819844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15cfe50 is same with the state(6) to be set 00:18:10.338 [2024-11-15 12:53:18.819867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15cfe50 (9): Bad file descriptor 00:18:10.338 [2024-11-15 12:53:18.819885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:18:10.338 [2024-11-15 12:53:18.819893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:18:10.338 [2024-11-15 12:53:18.819903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:18:10.338 [2024-11-15 12:53:18.819914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:18:10.338 [2024-11-15 12:53:18.819924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:11.275 2245.25 IOPS, 8.77 MiB/s [2024-11-15T12:53:19.945Z] [2024-11-15 12:53:19.823271] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.275 [2024-11-15 12:53:19.823341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15cfe50 with addr=10.0.0.3, port=4420 00:18:11.275 [2024-11-15 12:53:19.823357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15cfe50 is same with the state(6) to be set 00:18:11.275 [2024-11-15 12:53:19.823629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15cfe50 (9): Bad file descriptor 00:18:11.275 [2024-11-15 12:53:19.823884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:18:11.275 [2024-11-15 12:53:19.823897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:18:11.275 [2024-11-15 12:53:19.823908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:18:11.275 [2024-11-15 12:53:19.823919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:18:11.275 [2024-11-15 12:53:19.823931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:11.275 12:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:11.534 [2024-11-15 12:53:20.064957] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:11.534 12:53:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 81606 00:18:12.359 1796.20 IOPS, 7.02 MiB/s [2024-11-15T12:53:21.029Z] [2024-11-15 12:53:20.847641] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:18:14.306 2820.67 IOPS, 11.02 MiB/s [2024-11-15T12:53:23.932Z] 3744.86 IOPS, 14.63 MiB/s [2024-11-15T12:53:24.869Z] 4608.50 IOPS, 18.00 MiB/s [2024-11-15T12:53:25.806Z] 5276.89 IOPS, 20.61 MiB/s [2024-11-15T12:53:25.806Z] 5806.40 IOPS, 22.68 MiB/s 00:18:17.136 Latency(us) 00:18:17.136 [2024-11-15T12:53:25.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.136 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:17.136 Verification LBA range: start 0x0 length 0x4000 00:18:17.136 NVMe0n1 : 10.01 5813.63 22.71 3756.18 0.00 13348.42 707.49 3019898.88 00:18:17.136 [2024-11-15T12:53:25.806Z] =================================================================================================================== 00:18:17.136 [2024-11-15T12:53:25.806Z] Total : 5813.63 22.71 3756.18 0.00 13348.42 0.00 3019898.88 00:18:17.136 { 00:18:17.136 "results": [ 00:18:17.136 { 00:18:17.136 "job": "NVMe0n1", 00:18:17.136 "core_mask": "0x4", 00:18:17.136 "workload": "verify", 00:18:17.136 "status": "finished", 00:18:17.136 "verify_range": { 00:18:17.136 "start": 0, 00:18:17.136 "length": 16384 00:18:17.136 }, 00:18:17.136 "queue_depth": 128, 00:18:17.136 "io_size": 4096, 00:18:17.136 "runtime": 10.007512, 00:18:17.136 "iops": 5813.63279904136, 00:18:17.136 "mibps": 22.709503121255313, 00:18:17.136 "io_failed": 37590, 00:18:17.136 "io_timeout": 0, 00:18:17.136 "avg_latency_us": 13348.41712086723, 00:18:17.136 "min_latency_us": 707.4909090909091, 00:18:17.136 "max_latency_us": 3019898.88 00:18:17.136 } 00:18:17.136 ], 00:18:17.136 "core_count": 1 00:18:17.136 } 00:18:17.136 12:53:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 81479 00:18:17.136 12:53:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81479 ']' 00:18:17.136 12:53:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81479 00:18:17.136 12:53:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:18:17.136 12:53:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.136 12:53:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81479 00:18:17.136 killing process with pid 81479 00:18:17.136 Received shutdown signal, test time was about 10.000000 seconds 00:18:17.136 00:18:17.136 Latency(us) 00:18:17.136 [2024-11-15T12:53:25.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.136 [2024-11-15T12:53:25.807Z] =================================================================================================================== 00:18:17.137 [2024-11-15T12:53:25.807Z] Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:18:17.137 12:53:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:17.137 12:53:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:17.137 12:53:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81479' 00:18:17.137 12:53:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81479 00:18:17.137 12:53:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81479 00:18:17.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:17.396 12:53:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=81726 00:18:17.396 12:53:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:18:17.396 12:53:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 81726 /var/tmp/bdevperf.sock 00:18:17.396 12:53:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81726 ']' 00:18:17.396 12:53:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:17.396 12:53:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:17.396 12:53:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:17.396 12:53:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:17.396 12:53:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:17.396 [2024-11-15 12:53:25.930782] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
00:18:17.396 [2024-11-15 12:53:25.930882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81726 ] 00:18:17.655 [2024-11-15 12:53:26.078594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.655 [2024-11-15 12:53:26.107862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.655 [2024-11-15 12:53:26.136721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:17.655 12:53:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.655 12:53:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:18:17.655 12:53:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=81739 00:18:17.655 12:53:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81726 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:18:17.655 12:53:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:18:17.914 12:53:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:18.482 NVMe0n1 00:18:18.482 12:53:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=81776 00:18:18.482 12:53:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:18.482 12:53:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:18:18.482 Running I/O for 10 seconds... 
00:18:19.420 12:53:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:19.681 16764.00 IOPS, 65.48 MiB/s [2024-11-15T12:53:28.351Z] [2024-11-15 12:53:28.113097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.681 [2024-11-15 12:53:28.113158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.681 [2024-11-15 12:53:28.113172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.681 [2024-11-15 12:53:28.113181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.681 [2024-11-15 12:53:28.113190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.681 [2024-11-15 12:53:28.113198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.681 [2024-11-15 12:53:28.113207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.681 [2024-11-15 12:53:28.113215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.681 [2024-11-15 12:53:28.113223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03e50 is same with the state(6) to be set 00:18:19.681 [2024-11-15 12:53:28.113460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.681 [2024-11-15 12:53:28.113477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.681 [2024-11-15 12:53:28.113497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:31736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.681 [2024-11-15 12:53:28.113506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.681 [2024-11-15 12:53:28.113517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:116928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.681 [2024-11-15 12:53:28.113526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.113553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:30960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.113561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.113572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.113581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.113592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.113600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.113611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.113620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.113631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:31544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.113640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.113694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.113712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.113728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:54296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.113738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.113750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:72272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.113760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.113771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.113781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.113792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:118640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.113802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.113817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.113840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.113852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.113861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 
[2024-11-15 12:53:28.113873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:86176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.113882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.113894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:129192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.113904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.113915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.113925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.113936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:58792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.113946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.113957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:28824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.113967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.113978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:114944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.113988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.113999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.114023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.114034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.114042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.114053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:73360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.114062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.114072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:54408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.114081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.114106] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:120264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.114115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.114126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:73560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.114135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.114145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.114153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.114165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.114173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.114184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:55848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.114193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.114204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:118104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.114213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.114223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.114232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.114242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:42744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.114251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.114261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:105120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.114270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.114280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:46088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.114289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.114299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:35 nsid:1 lba:40392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.114307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.114318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:34704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.114341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.114351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.114360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.114370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:40384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.114378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.114388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:60072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.114397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.114407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.114415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.114425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:34536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.114433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.682 [2024-11-15 12:53:28.114443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.682 [2024-11-15 12:53:28.114467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.114478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.114487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.114497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:35280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.114506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.114517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42288 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.114525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.114535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.114544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.114554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:124944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.114563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.114573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:98848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.114582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.114593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:42064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.114602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.114613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.114622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.114632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:56712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.114641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.114652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.114670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.114682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.114690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.114701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:35856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.114710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.114720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:19.683 [2024-11-15 12:53:28.114729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.114739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:106520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.114748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.114758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:46200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.114767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.114777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.114785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.114797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.114805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.114816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:127328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.114824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.114835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:54240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.114844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.114855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.114864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.114874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:37824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.114883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.114908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:48936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.114916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.114926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:86264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.114935] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.114945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:72008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.114953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.114964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.114972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.114982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.114990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.115016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:49576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.115025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.115035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:45840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.115044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.115054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:102728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.115063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.115073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.115082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.115092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:35696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.115100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.115110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.115119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.115130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.115139] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.115150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:34464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.115158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.115169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:111424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.115177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.115188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.115197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.115207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:45248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.115216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.115226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.115235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.115245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:93576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.683 [2024-11-15 12:53:28.115254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.683 [2024-11-15 12:53:28.115264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:104992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:100528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:58344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:28224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:38864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:42448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:122152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:109904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:113680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:19.684 [2024-11-15 12:53:28.115559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:57472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:44808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:55072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:68024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 
12:53:28.115761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:110048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:88072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:42160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:54136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:35448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:69968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:39432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115971] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:49160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.115990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.115998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.116008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:102040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.116016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.116026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.116035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.116061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:103336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.116069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.116080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:109312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.684 [2024-11-15 12:53:28.116088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.684 [2024-11-15 12:53:28.116101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:49216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.685 [2024-11-15 12:53:28.116110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.685 [2024-11-15 12:53:28.116122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.685 [2024-11-15 12:53:28.116131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.685 [2024-11-15 12:53:28.116141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:35320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.685 [2024-11-15 12:53:28.116150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.685 [2024-11-15 12:53:28.116160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:128888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.685 [2024-11-15 12:53:28.116169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.685 [2024-11-15 12:53:28.116179] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:126 nsid:1 lba:101800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.685 [2024-11-15 12:53:28.116188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.685 [2024-11-15 12:53:28.116197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71140 is same with the state(6) to be set 00:18:19.685 [2024-11-15 12:53:28.116208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:19.685 [2024-11-15 12:53:28.116216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:19.685 [2024-11-15 12:53:28.116224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86416 len:8 PRP1 0x0 PRP2 0x0 00:18:19.685 [2024-11-15 12:53:28.116232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.685 [2024-11-15 12:53:28.116514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:18:19.685 [2024-11-15 12:53:28.116556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f03e50 (9): Bad file descriptor 00:18:19.685 [2024-11-15 12:53:28.116693] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:19.685 [2024-11-15 12:53:28.116717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f03e50 with addr=10.0.0.3, port=4420 00:18:19.685 [2024-11-15 12:53:28.116732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03e50 is same with the state(6) to be set 00:18:19.685 [2024-11-15 12:53:28.116749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f03e50 (9): Bad file descriptor 00:18:19.685 [2024-11-15 12:53:28.116765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:18:19.685 [2024-11-15 12:53:28.116774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:18:19.685 [2024-11-15 12:53:28.116784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:18:19.685 [2024-11-15 12:53:28.116794] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:18:19.685 [2024-11-15 12:53:28.116803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:18:19.685 12:53:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 81776 00:18:21.559 9398.00 IOPS, 36.71 MiB/s [2024-11-15T12:53:30.229Z] 6265.33 IOPS, 24.47 MiB/s [2024-11-15T12:53:30.229Z] [2024-11-15 12:53:30.117038] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:21.559 [2024-11-15 12:53:30.117139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f03e50 with addr=10.0.0.3, port=4420 00:18:21.559 [2024-11-15 12:53:30.117154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03e50 is same with the state(6) to be set 00:18:21.559 [2024-11-15 12:53:30.117178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f03e50 (9): Bad file descriptor 00:18:21.559 [2024-11-15 12:53:30.117195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:18:21.559 [2024-11-15 12:53:30.117206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:18:21.559 [2024-11-15 12:53:30.117216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:18:21.559 [2024-11-15 12:53:30.117226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:18:21.559 [2024-11-15 12:53:30.117238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:18:23.434 4699.00 IOPS, 18.36 MiB/s [2024-11-15T12:53:32.363Z] 3759.20 IOPS, 14.68 MiB/s [2024-11-15T12:53:32.363Z] [2024-11-15 12:53:32.117387] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:23.693 [2024-11-15 12:53:32.117457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f03e50 with addr=10.0.0.3, port=4420 00:18:23.693 [2024-11-15 12:53:32.117473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03e50 is same with the state(6) to be set 00:18:23.693 [2024-11-15 12:53:32.117496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f03e50 (9): Bad file descriptor 00:18:23.693 [2024-11-15 12:53:32.117514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:18:23.693 [2024-11-15 12:53:32.117524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:18:23.693 [2024-11-15 12:53:32.117534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:18:23.693 [2024-11-15 12:53:32.117545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:18:23.693 [2024-11-15 12:53:32.117556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:18:25.567 3132.67 IOPS, 12.24 MiB/s [2024-11-15T12:53:34.237Z] 2685.14 IOPS, 10.49 MiB/s [2024-11-15T12:53:34.237Z] [2024-11-15 12:53:34.117647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
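The per-second samples above (9398.00, 6265.33, 4699.00, 3759.20, 3132.67, 2685.14 IOPS, and 2349.50 just below) fall off exactly like a running average: roughly 18796 reads completed before the connection was dropped, nothing completes afterwards, and dividing that fixed total by 2, 3, ... 8 elapsed seconds reproduces every sample. The 18796 figure is inferred from the final summary (iops x runtime) rather than printed anywhere; the one-liner below is only a consistency check, not part of the harness.

  # Consistency check for the periodic IOPS samples (illustrative, not from the test scripts).
  awk 'BEGIN { for (t = 2; t <= 8; t++) printf "after %ds: %.2f IOPS\n", t, 18796 / t }'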
00:18:25.567 [2024-11-15 12:53:34.117727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:18:25.567 [2024-11-15 12:53:34.117756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:18:25.567 [2024-11-15 12:53:34.117766] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:18:25.567 [2024-11-15 12:53:34.117780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:18:26.505 2349.50 IOPS, 9.18 MiB/s 00:18:26.505 Latency(us) 00:18:26.505 [2024-11-15T12:53:35.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.505 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:18:26.505 NVMe0n1 : 8.13 2312.60 9.03 15.75 0.00 54932.89 7208.96 7015926.69 00:18:26.505 [2024-11-15T12:53:35.175Z] =================================================================================================================== 00:18:26.505 [2024-11-15T12:53:35.175Z] Total : 2312.60 9.03 15.75 0.00 54932.89 7208.96 7015926.69 00:18:26.505 { 00:18:26.505 "results": [ 00:18:26.505 { 00:18:26.505 "job": "NVMe0n1", 00:18:26.505 "core_mask": "0x4", 00:18:26.505 "workload": "randread", 00:18:26.505 "status": "finished", 00:18:26.505 "queue_depth": 128, 00:18:26.505 "io_size": 4096, 00:18:26.505 "runtime": 8.127661, 00:18:26.505 "iops": 2312.596453026277, 00:18:26.505 "mibps": 9.033579894633894, 00:18:26.505 "io_failed": 128, 00:18:26.505 "io_timeout": 0, 00:18:26.505 "avg_latency_us": 54932.894688418746, 00:18:26.505 "min_latency_us": 7208.96, 00:18:26.505 "max_latency_us": 7015926.69090909 00:18:26.505 } 00:18:26.505 ], 00:18:26.505 "core_count": 1 00:18:26.505 } 00:18:26.505 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:26.505 Attaching 5 probes... 
00:18:26.505 1406.729991: reset bdev controller NVMe0 00:18:26.505 1406.816692: reconnect bdev controller NVMe0 00:18:26.505 3407.107061: reconnect delay bdev controller NVMe0 00:18:26.505 3407.146861: reconnect bdev controller NVMe0 00:18:26.505 5407.490928: reconnect delay bdev controller NVMe0 00:18:26.505 5407.510238: reconnect bdev controller NVMe0 00:18:26.505 7407.826209: reconnect delay bdev controller NVMe0 00:18:26.505 7407.869243: reconnect bdev controller NVMe0 00:18:26.505 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:18:26.505 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:18:26.505 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 81739 00:18:26.505 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:26.505 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 81726 00:18:26.505 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81726 ']' 00:18:26.505 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81726 00:18:26.505 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:18:26.505 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:26.505 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81726 00:18:26.764 killing process with pid 81726 00:18:26.764 Received shutdown signal, test time was about 8.203232 seconds 00:18:26.764 00:18:26.764 Latency(us) 00:18:26.764 [2024-11-15T12:53:35.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.764 [2024-11-15T12:53:35.434Z] =================================================================================================================== 00:18:26.764 [2024-11-15T12:53:35.434Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:26.764 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:26.764 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:26.764 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81726' 00:18:26.764 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81726 00:18:26.764 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81726 00:18:26.764 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:27.024 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:18:27.024 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:18:27.024 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:27.024 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:18:27.024 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:27.024 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:18:27.024 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:27.024 12:53:35 
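The probe timestamps above (1406.7 ms, 3407.1 ms, 5407.5 ms, 7407.8 ms) show a fresh reconnect attempt roughly every two seconds, and the pass criterion is simply that more than two 'reconnect delay' entries appear in the trace: the guard (( 3 <= 2 )) is false, so the failure branch is skipped and the test tears down normally. The summary fields are also self-consistent, since MiB/s is IOPS scaled by the 4096-byte I/O size. A minimal sketch of both checks follows, using the values reported above; the grep expression and the <= 2 guard come from timeout.sh, while the awk re-derivation and the trace.txt path are illustrative.

  # Re-derive the reported throughput from the JSON fields (illustrative).
  iops=2312.596453026277
  io_size=4096
  runtime=8.127661
  awk -v iops="$iops" -v sz="$io_size" 'BEGIN { printf "MiB/s = %.6f\n", iops * sz / 1048576 }'   # ~9.0336
  awk -v iops="$iops" -v rt="$runtime"  'BEGIN { printf "total I/Os ~ %.0f\n", iops * rt }'       # ~18796

  # Pass criterion: more than two reconnect delays must appear in the trace.
  delays=$(grep -c 'reconnect delay bdev controller NVMe0' trace.txt)
  (( delays <= 2 )) && echo "FAIL: only $delays reconnect delays" >&2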
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:27.283 rmmod nvme_tcp 00:18:27.283 rmmod nvme_fabrics 00:18:27.283 rmmod nvme_keyring 00:18:27.283 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:27.283 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:18:27.283 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:18:27.283 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 81290 ']' 00:18:27.283 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 81290 00:18:27.283 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81290 ']' 00:18:27.283 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81290 00:18:27.283 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:18:27.283 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.283 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81290 00:18:27.283 killing process with pid 81290 00:18:27.283 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:27.283 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:27.283 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81290' 00:18:27.283 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81290 00:18:27.283 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81290 00:18:27.283 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:27.283 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:27.283 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:27.283 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:18:27.283 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:18:27.283 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:27.283 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:18:27.283 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:27.283 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:27.283 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:27.283 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:27.542 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:27.542 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:27.543 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:27.543 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:27.543 12:53:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:27.543 12:53:36 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:27.543 12:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:27.543 12:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:27.543 12:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:27.543 12:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:27.543 12:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:27.543 12:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:27.543 12:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:27.543 12:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:27.543 12:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:27.543 12:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:18:27.543 00:18:27.543 real 0m45.981s 00:18:27.543 user 2m15.117s 00:18:27.543 sys 0m5.286s 00:18:27.543 12:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:27.543 12:53:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:27.543 ************************************ 00:18:27.543 END TEST nvmf_timeout 00:18:27.543 ************************************ 00:18:27.543 12:53:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:18:27.543 12:53:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:27.543 00:18:27.543 real 4m55.757s 00:18:27.543 user 12m56.277s 00:18:27.543 sys 1m6.138s 00:18:27.543 12:53:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:27.543 12:53:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.543 ************************************ 00:18:27.543 END TEST nvmf_host 00:18:27.543 ************************************ 00:18:27.802 12:53:36 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:18:27.802 12:53:36 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:18:27.802 00:18:27.802 real 12m8.215s 00:18:27.802 user 29m14.283s 00:18:27.802 sys 3m1.398s 00:18:27.802 12:53:36 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:27.802 12:53:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:27.802 ************************************ 00:18:27.802 END TEST nvmf_tcp 00:18:27.802 ************************************ 00:18:27.802 12:53:36 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:18:27.802 12:53:36 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:27.802 12:53:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:27.803 12:53:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:27.803 12:53:36 -- common/autotest_common.sh@10 -- # set +x 00:18:27.803 ************************************ 00:18:27.803 START TEST nvmf_dif 00:18:27.803 ************************************ 00:18:27.803 12:53:36 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:27.803 * Looking for test storage... 
00:18:27.803 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:27.803 12:53:36 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:27.803 12:53:36 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:18:27.803 12:53:36 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:27.803 12:53:36 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:27.803 12:53:36 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:27.803 12:53:36 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:27.803 12:53:36 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:27.803 12:53:36 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:18:27.803 12:53:36 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:18:27.803 12:53:36 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:18:27.803 12:53:36 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:18:27.803 12:53:36 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:18:27.803 12:53:36 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:18:27.803 12:53:36 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:18:27.803 12:53:36 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:27.803 12:53:36 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:18:27.803 12:53:36 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:18:27.803 12:53:36 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:27.803 12:53:36 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:27.803 12:53:36 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:18:27.803 12:53:36 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:18:27.803 12:53:36 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:27.803 12:53:36 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:18:27.803 12:53:36 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:18:27.803 12:53:36 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:18:27.803 12:53:36 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:18:27.803 12:53:36 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:27.803 12:53:36 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:18:27.803 12:53:36 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:18:27.803 12:53:36 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:27.803 12:53:36 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:27.803 12:53:36 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:18:27.803 12:53:36 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:27.803 12:53:36 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:27.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.803 --rc genhtml_branch_coverage=1 00:18:27.803 --rc genhtml_function_coverage=1 00:18:27.803 --rc genhtml_legend=1 00:18:27.803 --rc geninfo_all_blocks=1 00:18:27.803 --rc geninfo_unexecuted_blocks=1 00:18:27.803 00:18:27.803 ' 00:18:27.803 12:53:36 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:27.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.803 --rc genhtml_branch_coverage=1 00:18:27.803 --rc genhtml_function_coverage=1 00:18:27.803 --rc genhtml_legend=1 00:18:27.803 --rc geninfo_all_blocks=1 00:18:27.803 --rc geninfo_unexecuted_blocks=1 00:18:27.803 00:18:27.803 ' 00:18:27.803 12:53:36 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:18:27.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.803 --rc genhtml_branch_coverage=1 00:18:27.803 --rc genhtml_function_coverage=1 00:18:27.803 --rc genhtml_legend=1 00:18:27.803 --rc geninfo_all_blocks=1 00:18:27.803 --rc geninfo_unexecuted_blocks=1 00:18:27.803 00:18:27.803 ' 00:18:27.803 12:53:36 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:27.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.803 --rc genhtml_branch_coverage=1 00:18:27.803 --rc genhtml_function_coverage=1 00:18:27.803 --rc genhtml_legend=1 00:18:27.803 --rc geninfo_all_blocks=1 00:18:27.803 --rc geninfo_unexecuted_blocks=1 00:18:27.803 00:18:27.803 ' 00:18:27.803 12:53:36 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:28.063 12:53:36 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:18:28.063 12:53:36 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:28.063 12:53:36 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:28.063 12:53:36 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:28.063 12:53:36 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.063 12:53:36 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.063 12:53:36 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.063 12:53:36 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:18:28.063 12:53:36 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:28.063 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:28.063 12:53:36 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:18:28.063 12:53:36 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:18:28.063 12:53:36 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:18:28.063 12:53:36 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:18:28.063 12:53:36 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.063 12:53:36 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:18:28.063 12:53:36 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:28.063 12:53:36 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:28.063 Cannot find device "nvmf_init_br" 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@162 -- # true 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:28.063 Cannot find device "nvmf_init_br2" 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@163 -- # true 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:28.063 Cannot find device "nvmf_tgt_br" 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@164 -- # true 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:28.063 Cannot find device "nvmf_tgt_br2" 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@165 -- # true 00:18:28.063 12:53:36 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:28.063 Cannot find device "nvmf_init_br" 00:18:28.064 12:53:36 nvmf_dif -- nvmf/common.sh@166 -- # true 00:18:28.064 12:53:36 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:28.064 Cannot find device "nvmf_init_br2" 00:18:28.064 12:53:36 nvmf_dif -- nvmf/common.sh@167 -- # true 00:18:28.064 12:53:36 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:28.064 Cannot find device "nvmf_tgt_br" 00:18:28.064 12:53:36 nvmf_dif -- nvmf/common.sh@168 -- # true 00:18:28.064 12:53:36 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:28.064 Cannot find device "nvmf_tgt_br2" 00:18:28.064 12:53:36 nvmf_dif -- nvmf/common.sh@169 -- # true 00:18:28.064 12:53:36 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:28.064 Cannot find device "nvmf_br" 00:18:28.064 12:53:36 nvmf_dif -- nvmf/common.sh@170 -- # true 00:18:28.064 12:53:36 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:18:28.064 Cannot find device "nvmf_init_if" 00:18:28.064 12:53:36 nvmf_dif -- nvmf/common.sh@171 -- # true 00:18:28.064 12:53:36 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:28.064 Cannot find device "nvmf_init_if2" 00:18:28.064 12:53:36 nvmf_dif -- nvmf/common.sh@172 -- # true 00:18:28.064 12:53:36 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:28.064 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:28.064 12:53:36 nvmf_dif -- nvmf/common.sh@173 -- # true 00:18:28.064 12:53:36 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:28.064 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:28.064 12:53:36 nvmf_dif -- nvmf/common.sh@174 -- # true 00:18:28.064 12:53:36 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:28.064 12:53:36 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:28.064 12:53:36 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:28.064 12:53:36 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:28.064 12:53:36 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:28.064 12:53:36 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:28.064 12:53:36 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:28.064 12:53:36 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:28.064 12:53:36 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:28.064 12:53:36 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:28.323 12:53:36 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:28.323 12:53:36 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:28.323 12:53:36 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:28.323 12:53:36 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:28.323 12:53:36 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:28.323 12:53:36 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:28.323 12:53:36 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:28.323 12:53:36 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:28.323 12:53:36 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:28.323 12:53:36 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:28.323 12:53:36 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:28.323 12:53:36 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:28.323 12:53:36 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:28.323 12:53:36 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:28.323 12:53:36 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:28.323 12:53:36 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:28.323 12:53:36 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:28.323 12:53:36 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:28.323 12:53:36 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:28.323 12:53:36 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:28.323 12:53:36 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:28.323 12:53:36 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:28.323 12:53:36 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:28.323 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:28.323 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:18:28.323 00:18:28.323 --- 10.0.0.3 ping statistics --- 00:18:28.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.323 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:18:28.323 12:53:36 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:28.323 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:28.323 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:18:28.323 00:18:28.323 --- 10.0.0.4 ping statistics --- 00:18:28.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.323 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:18:28.323 12:53:36 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:28.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:28.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:18:28.323 00:18:28.323 --- 10.0.0.1 ping statistics --- 00:18:28.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.323 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:18:28.323 12:53:36 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:28.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:28.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:18:28.323 00:18:28.323 --- 10.0.0.2 ping statistics --- 00:18:28.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.323 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:18:28.323 12:53:36 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:28.323 12:53:36 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:18:28.323 12:53:36 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:18:28.323 12:53:36 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:28.582 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:28.582 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:28.582 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:28.841 12:53:37 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:28.841 12:53:37 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:28.841 12:53:37 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:28.841 12:53:37 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:28.841 12:53:37 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:28.841 12:53:37 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:28.841 12:53:37 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:18:28.841 12:53:37 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:18:28.841 12:53:37 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:28.841 12:53:37 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.841 12:53:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:28.841 12:53:37 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=82277 00:18:28.841 12:53:37 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 82277 00:18:28.841 12:53:37 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:28.841 12:53:37 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 82277 ']' 00:18:28.841 12:53:37 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.841 12:53:37 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.841 12:53:37 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.841 12:53:37 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.841 12:53:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:28.841 [2024-11-15 12:53:37.359746] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:18:28.841 [2024-11-15 12:53:37.359836] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:29.101 [2024-11-15 12:53:37.513220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.101 [2024-11-15 12:53:37.551760] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
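Everything the dif tests need network-wise is built by the nvmf_veth_init sequence traced above: two initiator veth interfaces stay in the host namespace (10.0.0.1 and 10.0.0.2), their target-side peers are moved into nvmf_tgt_ns_spdk (10.0.0.3 and 10.0.0.4), all ends are joined through the nvmf_br bridge, TCP port 4420 is opened with iptables, and a single ping of each address confirms the path. Condensed to one initiator/target pair, the same topology can be reproduced with the commands below; names and addresses match the trace, only the condensation is new.

  # Minimal recreation of the veth/netns topology used by the harness (sketch).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # Open the NVMe/TCP port and verify the initiator can reach the target address.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3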
00:18:29.101 [2024-11-15 12:53:37.551823] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:29.101 [2024-11-15 12:53:37.551837] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:29.101 [2024-11-15 12:53:37.551847] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:29.101 [2024-11-15 12:53:37.551857] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:29.101 [2024-11-15 12:53:37.552221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.101 [2024-11-15 12:53:37.588559] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:29.101 12:53:37 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.101 12:53:37 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:18:29.101 12:53:37 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:29.101 12:53:37 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:29.101 12:53:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:29.101 12:53:37 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:29.101 12:53:37 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:18:29.101 12:53:37 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:18:29.101 12:53:37 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.101 12:53:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:29.101 [2024-11-15 12:53:37.692050] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:29.101 12:53:37 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.101 12:53:37 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:18:29.101 12:53:37 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:29.101 12:53:37 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:29.101 12:53:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:29.101 ************************************ 00:18:29.101 START TEST fio_dif_1_default 00:18:29.101 ************************************ 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:29.101 bdev_null0 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:29.101 
12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:29.101 [2024-11-15 12:53:37.736201] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:29.101 { 00:18:29.101 "params": { 00:18:29.101 "name": "Nvme$subsystem", 00:18:29.101 "trtype": "$TEST_TRANSPORT", 00:18:29.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:29.101 "adrfam": "ipv4", 00:18:29.101 "trsvcid": "$NVMF_PORT", 00:18:29.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:29.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:29.101 "hdgst": ${hdgst:-false}, 00:18:29.101 "ddgst": ${ddgst:-false} 00:18:29.101 }, 00:18:29.101 "method": "bdev_nvme_attach_controller" 00:18:29.101 } 00:18:29.101 EOF 00:18:29.101 )") 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:18:29.101 12:53:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:29.101 "params": { 00:18:29.101 "name": "Nvme0", 00:18:29.101 "trtype": "tcp", 00:18:29.101 "traddr": "10.0.0.3", 00:18:29.101 "adrfam": "ipv4", 00:18:29.101 "trsvcid": "4420", 00:18:29.101 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:29.101 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:29.101 "hdgst": false, 00:18:29.101 "ddgst": false 00:18:29.101 }, 00:18:29.101 "method": "bdev_nvme_attach_controller" 00:18:29.101 }' 00:18:29.360 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:29.360 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:29.361 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:29.361 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:29.361 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:29.361 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:29.361 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:29.361 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:29.361 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:29.361 12:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:29.361 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:29.361 fio-3.35 00:18:29.361 Starting 1 thread 00:18:41.567 00:18:41.567 filename0: (groupid=0, jobs=1): err= 0: pid=82336: Fri Nov 15 12:53:48 2024 00:18:41.567 read: IOPS=10.1k, BW=39.6MiB/s (41.5MB/s)(396MiB/10001msec) 00:18:41.567 slat (nsec): min=5848, max=72628, avg=7525.07, stdev=3266.96 00:18:41.567 clat (usec): min=310, max=3000, avg=372.51, stdev=43.09 00:18:41.567 lat (usec): min=316, max=3028, avg=380.04, stdev=43.85 00:18:41.567 clat percentiles (usec): 00:18:41.567 | 1.00th=[ 318], 5.00th=[ 
322], 10.00th=[ 330], 20.00th=[ 338], 00:18:41.567 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 375], 00:18:41.567 | 70.00th=[ 388], 80.00th=[ 400], 90.00th=[ 424], 95.00th=[ 445], 00:18:41.567 | 99.00th=[ 502], 99.50th=[ 523], 99.90th=[ 570], 99.95th=[ 586], 00:18:41.567 | 99.99th=[ 979] 00:18:41.567 bw ( KiB/s): min=38592, max=41760, per=99.99%, avg=40506.95, stdev=756.60, samples=19 00:18:41.567 iops : min= 9648, max=10440, avg=10126.74, stdev=189.15, samples=19 00:18:41.567 lat (usec) : 500=98.94%, 750=1.05%, 1000=0.01% 00:18:41.567 lat (msec) : 2=0.01%, 4=0.01% 00:18:41.567 cpu : usr=85.08%, sys=13.02%, ctx=17, majf=0, minf=9 00:18:41.567 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:41.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:41.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:41.567 issued rwts: total=101288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:41.567 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:41.567 00:18:41.567 Run status group 0 (all jobs): 00:18:41.567 READ: bw=39.6MiB/s (41.5MB/s), 39.6MiB/s-39.6MiB/s (41.5MB/s-41.5MB/s), io=396MiB (415MB), run=10001-10001msec 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.567 00:18:41.567 real 0m10.923s 00:18:41.567 user 0m9.117s 00:18:41.567 sys 0m1.540s 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:41.567 ************************************ 00:18:41.567 END TEST fio_dif_1_default 00:18:41.567 ************************************ 00:18:41.567 12:53:48 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:18:41.567 12:53:48 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:41.567 12:53:48 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:41.567 12:53:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:41.567 ************************************ 00:18:41.567 START TEST fio_dif_1_multi_subsystems 00:18:41.567 ************************************ 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:41.567 bdev_null0 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.567 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:41.568 [2024-11-15 12:53:48.713117] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:41.568 bdev_null1 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:41.568 { 00:18:41.568 "params": { 00:18:41.568 "name": "Nvme$subsystem", 00:18:41.568 "trtype": "$TEST_TRANSPORT", 00:18:41.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:41.568 "adrfam": "ipv4", 00:18:41.568 "trsvcid": "$NVMF_PORT", 00:18:41.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:41.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:41.568 "hdgst": ${hdgst:-false}, 00:18:41.568 "ddgst": ${ddgst:-false} 00:18:41.568 }, 00:18:41.568 "method": "bdev_nvme_attach_controller" 00:18:41.568 } 00:18:41.568 EOF 00:18:41.568 )") 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:41.568 { 00:18:41.568 "params": { 00:18:41.568 "name": "Nvme$subsystem", 00:18:41.568 "trtype": "$TEST_TRANSPORT", 00:18:41.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:41.568 "adrfam": "ipv4", 00:18:41.568 "trsvcid": "$NVMF_PORT", 00:18:41.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:41.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:41.568 "hdgst": ${hdgst:-false}, 00:18:41.568 "ddgst": ${ddgst:-false} 00:18:41.568 }, 00:18:41.568 "method": "bdev_nvme_attach_controller" 00:18:41.568 } 00:18:41.568 EOF 00:18:41.568 )") 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:41.568 "params": { 00:18:41.568 "name": "Nvme0", 00:18:41.568 "trtype": "tcp", 00:18:41.568 "traddr": "10.0.0.3", 00:18:41.568 "adrfam": "ipv4", 00:18:41.568 "trsvcid": "4420", 00:18:41.568 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:41.568 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:41.568 "hdgst": false, 00:18:41.568 "ddgst": false 00:18:41.568 }, 00:18:41.568 "method": "bdev_nvme_attach_controller" 00:18:41.568 },{ 00:18:41.568 "params": { 00:18:41.568 "name": "Nvme1", 00:18:41.568 "trtype": "tcp", 00:18:41.568 "traddr": "10.0.0.3", 00:18:41.568 "adrfam": "ipv4", 00:18:41.568 "trsvcid": "4420", 00:18:41.568 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.568 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:41.568 "hdgst": false, 00:18:41.568 "ddgst": false 00:18:41.568 }, 00:18:41.568 "method": "bdev_nvme_attach_controller" 00:18:41.568 }' 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:41.568 12:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:41.568 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:41.568 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:41.568 fio-3.35 00:18:41.568 Starting 2 threads 00:18:51.549 00:18:51.549 filename0: (groupid=0, jobs=1): err= 0: pid=82496: Fri Nov 15 12:53:59 2024 00:18:51.549 read: IOPS=5395, BW=21.1MiB/s (22.1MB/s)(211MiB/10001msec) 00:18:51.549 slat (nsec): min=6241, max=88233, avg=12719.36, stdev=4664.77 00:18:51.549 clat (usec): min=600, max=1207, avg=706.22, stdev=48.97 00:18:51.549 lat (usec): min=609, max=1232, avg=718.94, stdev=49.68 00:18:51.549 clat percentiles (usec): 00:18:51.549 | 1.00th=[ 627], 5.00th=[ 644], 10.00th=[ 652], 20.00th=[ 668], 00:18:51.549 | 30.00th=[ 676], 40.00th=[ 685], 50.00th=[ 701], 60.00th=[ 709], 00:18:51.549 | 70.00th=[ 725], 80.00th=[ 742], 90.00th=[ 766], 95.00th=[ 799], 00:18:51.549 | 99.00th=[ 873], 99.50th=[ 898], 99.90th=[ 955], 99.95th=[ 979], 00:18:51.549 | 99.99th=[ 1106] 00:18:51.549 bw ( KiB/s): min=21152, max=22112, per=50.01%, avg=21584.84, stdev=270.98, samples=19 00:18:51.549 iops : min= 5288, max= 
5528, avg=5396.21, stdev=67.74, samples=19 00:18:51.549 lat (usec) : 750=84.92%, 1000=15.04% 00:18:51.549 lat (msec) : 2=0.04% 00:18:51.549 cpu : usr=90.45%, sys=8.16%, ctx=8, majf=0, minf=0 00:18:51.549 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:51.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.549 issued rwts: total=53960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.549 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:51.549 filename1: (groupid=0, jobs=1): err= 0: pid=82497: Fri Nov 15 12:53:59 2024 00:18:51.549 read: IOPS=5395, BW=21.1MiB/s (22.1MB/s)(211MiB/10001msec) 00:18:51.549 slat (nsec): min=6250, max=83475, avg=12480.79, stdev=4458.13 00:18:51.549 clat (usec): min=557, max=1098, avg=708.08, stdev=53.26 00:18:51.549 lat (usec): min=564, max=1123, avg=720.56, stdev=54.12 00:18:51.549 clat percentiles (usec): 00:18:51.549 | 1.00th=[ 603], 5.00th=[ 635], 10.00th=[ 652], 20.00th=[ 668], 00:18:51.549 | 30.00th=[ 685], 40.00th=[ 693], 50.00th=[ 701], 60.00th=[ 717], 00:18:51.549 | 70.00th=[ 725], 80.00th=[ 742], 90.00th=[ 775], 95.00th=[ 807], 00:18:51.549 | 99.00th=[ 881], 99.50th=[ 906], 99.90th=[ 955], 99.95th=[ 988], 00:18:51.549 | 99.99th=[ 1074] 00:18:51.549 bw ( KiB/s): min=21152, max=22112, per=50.01%, avg=21584.84, stdev=270.98, samples=19 00:18:51.549 iops : min= 5288, max= 5528, avg=5396.21, stdev=67.74, samples=19 00:18:51.549 lat (usec) : 750=82.63%, 1000=17.33% 00:18:51.549 lat (msec) : 2=0.04% 00:18:51.549 cpu : usr=90.62%, sys=8.06%, ctx=6, majf=0, minf=0 00:18:51.549 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:51.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.549 issued rwts: total=53960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.549 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:51.549 00:18:51.549 Run status group 0 (all jobs): 00:18:51.549 READ: bw=42.2MiB/s (44.2MB/s), 21.1MiB/s-21.1MiB/s (22.1MB/s-22.1MB/s), io=422MiB (442MB), run=10001-10001msec 00:18:51.549 12:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:18:51.549 12:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:18:51.549 12:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:18:51.549 12:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:51.549 12:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:18:51.549 12:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:51.549 12:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.549 12:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:51.549 12:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.549 12:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:51.549 12:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.549 12:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set 
+x 00:18:51.549 12:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.549 12:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:18:51.549 12:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:18:51.549 12:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:18:51.549 12:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:51.549 12:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.550 12:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:51.550 12:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.550 12:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:18:51.550 12:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.550 12:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:51.550 12:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.550 00:18:51.550 real 0m11.037s 00:18:51.550 user 0m18.806s 00:18:51.550 sys 0m1.873s 00:18:51.550 12:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:51.550 12:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:51.550 ************************************ 00:18:51.550 END TEST fio_dif_1_multi_subsystems 00:18:51.550 ************************************ 00:18:51.550 12:53:59 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:18:51.550 12:53:59 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:51.550 12:53:59 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:51.550 12:53:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:51.550 ************************************ 00:18:51.550 START TEST fio_dif_rand_params 00:18:51.550 ************************************ 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 
-- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:51.550 bdev_null0 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:51.550 [2024-11-15 12:53:59.801871] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:51.550 { 00:18:51.550 "params": { 00:18:51.550 "name": "Nvme$subsystem", 00:18:51.550 "trtype": "$TEST_TRANSPORT", 00:18:51.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:51.550 "adrfam": "ipv4", 00:18:51.550 "trsvcid": "$NVMF_PORT", 00:18:51.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:51.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:51.550 "hdgst": ${hdgst:-false}, 00:18:51.550 "ddgst": ${ddgst:-false} 00:18:51.550 }, 00:18:51.550 "method": "bdev_nvme_attach_controller" 00:18:51.550 } 00:18:51.550 EOF 00:18:51.550 )") 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:51.550 12:53:59 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:51.550 "params": { 00:18:51.550 "name": "Nvme0", 00:18:51.550 "trtype": "tcp", 00:18:51.550 "traddr": "10.0.0.3", 00:18:51.550 "adrfam": "ipv4", 00:18:51.550 "trsvcid": "4420", 00:18:51.550 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:51.550 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:51.550 "hdgst": false, 00:18:51.550 "ddgst": false 00:18:51.550 }, 00:18:51.550 "method": "bdev_nvme_attach_controller" 00:18:51.550 }' 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:51.550 12:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:51.550 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:18:51.550 ... 00:18:51.550 fio-3.35 00:18:51.550 Starting 3 threads 00:18:58.116 00:18:58.116 filename0: (groupid=0, jobs=1): err= 0: pid=82653: Fri Nov 15 12:54:05 2024 00:18:58.116 read: IOPS=283, BW=35.5MiB/s (37.2MB/s)(177MiB/5002msec) 00:18:58.116 slat (nsec): min=6499, max=57624, avg=9497.59, stdev=4087.32 00:18:58.116 clat (usec): min=4400, max=12233, avg=10550.89, stdev=502.48 00:18:58.116 lat (usec): min=4409, max=12260, avg=10560.39, stdev=502.75 00:18:58.116 clat percentiles (usec): 00:18:58.116 | 1.00th=[10028], 5.00th=[10028], 10.00th=[10159], 20.00th=[10159], 00:18:58.116 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10421], 60.00th=[10552], 00:18:58.116 | 70.00th=[10683], 80.00th=[10814], 90.00th=[11076], 95.00th=[11338], 00:18:58.116 | 99.00th=[11994], 99.50th=[12125], 99.90th=[12256], 99.95th=[12256], 00:18:58.116 | 99.99th=[12256] 00:18:58.116 bw ( KiB/s): min=34560, max=37632, per=33.31%, avg=36266.67, stdev=923.02, samples=9 00:18:58.116 iops : min= 270, max= 294, avg=283.33, stdev= 7.21, samples=9 00:18:58.116 lat (msec) : 10=0.78%, 20=99.22% 00:18:58.116 cpu : usr=90.80%, sys=8.58%, ctx=9, majf=0, minf=0 00:18:58.116 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:58.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.116 issued rwts: total=1419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.116 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:58.116 filename0: (groupid=0, jobs=1): err= 0: pid=82654: Fri Nov 15 12:54:05 2024 00:18:58.116 read: IOPS=283, BW=35.4MiB/s (37.2MB/s)(177MiB/5005msec) 00:18:58.116 slat (nsec): min=4638, max=57248, avg=13105.48, stdev=5258.78 00:18:58.116 clat (usec): min=7483, max=12308, avg=10550.84, stdev=466.01 00:18:58.117 lat (usec): min=7505, max=12320, avg=10563.95, stdev=466.77 00:18:58.117 clat percentiles (usec): 00:18:58.117 | 1.00th=[10028], 5.00th=[10028], 10.00th=[10159], 20.00th=[10159], 00:18:58.117 | 30.00th=[10290], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 00:18:58.117 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11207], 95.00th=[11338], 00:18:58.117 | 99.00th=[11994], 99.50th=[12256], 99.90th=[12256], 99.95th=[12256], 00:18:58.117 | 99.99th=[12256] 00:18:58.117 bw ( KiB/s): min=35328, max=37632, per=33.23%, avg=36181.33, stdev=809.54, samples=9 00:18:58.117 iops : min= 276, max= 294, avg=282.67, stdev= 6.32, samples=9 00:18:58.117 lat (msec) : 10=1.13%, 20=98.87% 00:18:58.117 cpu : usr=90.53%, sys=8.53%, ctx=53, majf=0, minf=0 00:18:58.117 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:58.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.117 issued rwts: total=1419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.117 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:58.117 filename0: (groupid=0, jobs=1): err= 0: pid=82655: Fri Nov 15 12:54:05 2024 00:18:58.117 read: IOPS=283, BW=35.4MiB/s (37.2MB/s)(177MiB/5005msec) 00:18:58.117 slat (nsec): min=3744, max=59737, avg=13093.12, stdev=5064.52 00:18:58.117 clat (usec): min=7535, max=12306, avg=10550.70, stdev=447.96 00:18:58.117 lat (usec): min=7541, 
max=12320, avg=10563.79, stdev=448.58 00:18:58.117 clat percentiles (usec): 00:18:58.117 | 1.00th=[10028], 5.00th=[10028], 10.00th=[10159], 20.00th=[10159], 00:18:58.117 | 30.00th=[10290], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 00:18:58.117 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11207], 95.00th=[11338], 00:18:58.117 | 99.00th=[11994], 99.50th=[12125], 99.90th=[12256], 99.95th=[12256], 00:18:58.117 | 99.99th=[12256] 00:18:58.117 bw ( KiB/s): min=35328, max=37632, per=33.23%, avg=36181.33, stdev=809.54, samples=9 00:18:58.117 iops : min= 276, max= 294, avg=282.67, stdev= 6.32, samples=9 00:18:58.117 lat (msec) : 10=1.20%, 20=98.80% 00:18:58.117 cpu : usr=92.11%, sys=7.17%, ctx=5, majf=0, minf=0 00:18:58.117 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:58.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.117 issued rwts: total=1419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.117 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:58.117 00:18:58.117 Run status group 0 (all jobs): 00:18:58.117 READ: bw=106MiB/s (111MB/s), 35.4MiB/s-35.5MiB/s (37.2MB/s-37.2MB/s), io=532MiB (558MB), run=5002-5005msec 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@18 -- # local sub_id=0 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:58.117 bdev_null0 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:58.117 [2024-11-15 12:54:05.737227] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:58.117 bdev_null1 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.117 12:54:05 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:58.117 bdev_null2 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:58.117 12:54:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:58.117 { 00:18:58.117 "params": { 00:18:58.118 "name": "Nvme$subsystem", 
00:18:58.118 "trtype": "$TEST_TRANSPORT", 00:18:58.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:58.118 "adrfam": "ipv4", 00:18:58.118 "trsvcid": "$NVMF_PORT", 00:18:58.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:58.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:58.118 "hdgst": ${hdgst:-false}, 00:18:58.118 "ddgst": ${ddgst:-false} 00:18:58.118 }, 00:18:58.118 "method": "bdev_nvme_attach_controller" 00:18:58.118 } 00:18:58.118 EOF 00:18:58.118 )") 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:58.118 { 00:18:58.118 "params": { 00:18:58.118 "name": "Nvme$subsystem", 00:18:58.118 "trtype": "$TEST_TRANSPORT", 00:18:58.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:58.118 "adrfam": "ipv4", 00:18:58.118 "trsvcid": "$NVMF_PORT", 00:18:58.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:58.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:58.118 "hdgst": ${hdgst:-false}, 00:18:58.118 "ddgst": ${ddgst:-false} 00:18:58.118 }, 00:18:58.118 "method": "bdev_nvme_attach_controller" 00:18:58.118 } 00:18:58.118 EOF 00:18:58.118 )") 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:58.118 { 00:18:58.118 "params": { 00:18:58.118 "name": "Nvme$subsystem", 00:18:58.118 "trtype": "$TEST_TRANSPORT", 00:18:58.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:58.118 "adrfam": "ipv4", 00:18:58.118 "trsvcid": "$NVMF_PORT", 00:18:58.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:58.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:58.118 "hdgst": ${hdgst:-false}, 00:18:58.118 "ddgst": ${ddgst:-false} 00:18:58.118 }, 00:18:58.118 "method": "bdev_nvme_attach_controller" 00:18:58.118 } 00:18:58.118 EOF 00:18:58.118 )") 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:58.118 "params": { 00:18:58.118 "name": "Nvme0", 00:18:58.118 "trtype": "tcp", 00:18:58.118 "traddr": "10.0.0.3", 00:18:58.118 "adrfam": "ipv4", 00:18:58.118 "trsvcid": "4420", 00:18:58.118 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:58.118 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:58.118 "hdgst": false, 00:18:58.118 "ddgst": false 00:18:58.118 }, 00:18:58.118 "method": "bdev_nvme_attach_controller" 00:18:58.118 },{ 00:18:58.118 "params": { 00:18:58.118 "name": "Nvme1", 00:18:58.118 "trtype": "tcp", 00:18:58.118 "traddr": "10.0.0.3", 00:18:58.118 "adrfam": "ipv4", 00:18:58.118 "trsvcid": "4420", 00:18:58.118 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.118 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:58.118 "hdgst": false, 00:18:58.118 "ddgst": false 00:18:58.118 }, 00:18:58.118 "method": "bdev_nvme_attach_controller" 00:18:58.118 },{ 00:18:58.118 "params": { 00:18:58.118 "name": "Nvme2", 00:18:58.118 "trtype": "tcp", 00:18:58.118 "traddr": "10.0.0.3", 00:18:58.118 "adrfam": "ipv4", 00:18:58.118 "trsvcid": "4420", 00:18:58.118 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:58.118 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:58.118 "hdgst": false, 00:18:58.118 "ddgst": false 00:18:58.118 }, 00:18:58.118 "method": "bdev_nvme_attach_controller" 00:18:58.118 }' 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # 
asan_lib= 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:58.118 12:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:58.118 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:18:58.118 ... 00:18:58.118 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:18:58.118 ... 00:18:58.118 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:18:58.118 ... 00:18:58.118 fio-3.35 00:18:58.118 Starting 24 threads 00:19:08.187 00:19:08.187 filename0: (groupid=0, jobs=1): err= 0: pid=82750: Fri Nov 15 12:54:16 2024 00:19:08.187 read: IOPS=216, BW=865KiB/s (886kB/s)(8656KiB/10004msec) 00:19:08.187 slat (usec): min=4, max=9027, avg=25.98, stdev=277.06 00:19:08.187 clat (msec): min=4, max=167, avg=73.83, stdev=23.53 00:19:08.187 lat (msec): min=4, max=167, avg=73.85, stdev=23.53 00:19:08.187 clat percentiles (msec): 00:19:08.187 | 1.00th=[ 9], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 53], 00:19:08.187 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 77], 00:19:08.187 | 70.00th=[ 82], 80.00th=[ 91], 90.00th=[ 109], 95.00th=[ 120], 00:19:08.187 | 99.00th=[ 125], 99.50th=[ 131], 99.90th=[ 153], 99.95th=[ 167], 00:19:08.187 | 99.99th=[ 167] 00:19:08.187 bw ( KiB/s): min= 640, max= 1000, per=4.11%, avg=851.79, stdev=94.52, samples=19 00:19:08.187 iops : min= 160, max= 250, avg=212.95, stdev=23.63, samples=19 00:19:08.187 lat (msec) : 10=1.06%, 20=1.06%, 50=14.28%, 100=67.79%, 250=15.80% 00:19:08.187 cpu : usr=38.19%, sys=2.34%, ctx=1211, majf=0, minf=9 00:19:08.187 IO depths : 1=0.1%, 2=0.7%, 4=3.1%, 8=80.6%, 16=15.5%, 32=0.0%, >=64=0.0% 00:19:08.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.187 complete : 0=0.0%, 4=87.7%, 8=11.6%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.187 issued rwts: total=2164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:08.187 filename0: (groupid=0, jobs=1): err= 0: pid=82751: Fri Nov 15 12:54:16 2024 00:19:08.187 read: IOPS=213, BW=854KiB/s (875kB/s)(8572KiB/10033msec) 00:19:08.187 slat (usec): min=3, max=8020, avg=25.22, stdev=281.92 00:19:08.187 clat (msec): min=14, max=155, avg=74.70, stdev=22.84 00:19:08.187 lat (msec): min=14, max=155, avg=74.73, stdev=22.83 00:19:08.187 clat percentiles (msec): 00:19:08.187 | 1.00th=[ 24], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 55], 00:19:08.187 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 75], 00:19:08.187 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 117], 00:19:08.187 | 99.00th=[ 125], 99.50th=[ 131], 99.90th=[ 155], 99.95th=[ 155], 00:19:08.187 | 99.99th=[ 155] 00:19:08.187 bw ( KiB/s): min= 616, max= 1155, per=4.12%, avg=852.55, stdev=121.04, samples=20 00:19:08.187 iops : min= 154, max= 288, avg=213.10, stdev=30.16, samples=20 00:19:08.187 lat (msec) : 20=0.09%, 50=16.61%, 100=66.68%, 250=16.61% 00:19:08.187 cpu : usr=35.30%, sys=1.95%, ctx=1159, majf=0, minf=9 00:19:08.187 IO depths : 1=0.1%, 2=0.4%, 4=1.3%, 8=81.8%, 16=16.5%, 32=0.0%, >=64=0.0% 00:19:08.187 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.187 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.187 issued rwts: total=2143,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:08.187 filename0: (groupid=0, jobs=1): err= 0: pid=82752: Fri Nov 15 12:54:16 2024 00:19:08.187 read: IOPS=216, BW=864KiB/s (885kB/s)(8672KiB/10033msec) 00:19:08.187 slat (usec): min=3, max=8025, avg=20.97, stdev=243.32 00:19:08.187 clat (msec): min=13, max=155, avg=73.92, stdev=23.09 00:19:08.187 lat (msec): min=13, max=155, avg=73.94, stdev=23.09 00:19:08.187 clat percentiles (msec): 00:19:08.187 | 1.00th=[ 25], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 52], 00:19:08.187 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:19:08.187 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 118], 00:19:08.187 | 99.00th=[ 127], 99.50th=[ 132], 99.90th=[ 157], 99.95th=[ 157], 00:19:08.187 | 99.99th=[ 157] 00:19:08.187 bw ( KiB/s): min= 584, max= 1189, per=4.16%, avg=861.45, stdev=126.16, samples=20 00:19:08.187 iops : min= 146, max= 297, avg=215.35, stdev=31.51, samples=20 00:19:08.187 lat (msec) : 20=0.09%, 50=18.59%, 100=65.41%, 250=15.91% 00:19:08.187 cpu : usr=32.06%, sys=2.05%, ctx=948, majf=0, minf=0 00:19:08.187 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.1%, 16=16.3%, 32=0.0%, >=64=0.0% 00:19:08.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.187 complete : 0=0.0%, 4=87.7%, 8=12.0%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.187 issued rwts: total=2168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:08.187 filename0: (groupid=0, jobs=1): err= 0: pid=82753: Fri Nov 15 12:54:16 2024 00:19:08.187 read: IOPS=219, BW=879KiB/s (900kB/s)(8800KiB/10012msec) 00:19:08.187 slat (usec): min=3, max=8032, avg=27.51, stdev=302.78 00:19:08.187 clat (msec): min=10, max=168, avg=72.67, stdev=22.44 00:19:08.187 lat (msec): min=10, max=168, avg=72.70, stdev=22.45 00:19:08.187 clat percentiles (msec): 00:19:08.187 | 1.00th=[ 30], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 51], 00:19:08.187 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 74], 00:19:08.187 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 108], 95.00th=[ 117], 00:19:08.187 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 144], 99.95th=[ 169], 00:19:08.187 | 99.99th=[ 169] 00:19:08.187 bw ( KiB/s): min= 712, max= 1024, per=4.23%, avg=876.63, stdev=90.44, samples=19 00:19:08.187 iops : min= 178, max= 256, avg=219.11, stdev=22.57, samples=19 00:19:08.187 lat (msec) : 20=0.73%, 50=18.41%, 100=65.77%, 250=15.09% 00:19:08.187 cpu : usr=35.45%, sys=2.05%, ctx=1037, majf=0, minf=9 00:19:08.187 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=80.2%, 16=15.3%, 32=0.0%, >=64=0.0% 00:19:08.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.187 complete : 0=0.0%, 4=87.7%, 8=11.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.187 issued rwts: total=2200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:08.187 filename0: (groupid=0, jobs=1): err= 0: pid=82754: Fri Nov 15 12:54:16 2024 00:19:08.187 read: IOPS=214, BW=857KiB/s (878kB/s)(8584KiB/10011msec) 00:19:08.187 slat (usec): min=4, max=8033, avg=36.87, stdev=423.16 00:19:08.187 clat (msec): min=11, max=154, avg=74.41, stdev=22.92 00:19:08.187 lat (msec): min=11, max=154, avg=74.45, stdev=22.95 00:19:08.187 clat percentiles (msec): 
00:19:08.187 | 1.00th=[ 25], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 52], 00:19:08.187 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 74], 00:19:08.187 | 70.00th=[ 83], 80.00th=[ 93], 90.00th=[ 108], 95.00th=[ 118], 00:19:08.187 | 99.00th=[ 140], 99.50th=[ 153], 99.90th=[ 155], 99.95th=[ 155], 00:19:08.187 | 99.99th=[ 155] 00:19:08.187 bw ( KiB/s): min= 672, max= 1039, per=4.13%, avg=854.42, stdev=109.80, samples=19 00:19:08.187 iops : min= 168, max= 259, avg=213.53, stdev=27.42, samples=19 00:19:08.187 lat (msec) : 20=0.75%, 50=17.47%, 100=67.75%, 250=14.03% 00:19:08.187 cpu : usr=31.59%, sys=1.66%, ctx=885, majf=0, minf=9 00:19:08.187 IO depths : 1=0.1%, 2=1.4%, 4=5.7%, 8=77.8%, 16=15.0%, 32=0.0%, >=64=0.0% 00:19:08.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.187 complete : 0=0.0%, 4=88.4%, 8=10.4%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.187 issued rwts: total=2146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:08.187 filename0: (groupid=0, jobs=1): err= 0: pid=82755: Fri Nov 15 12:54:16 2024 00:19:08.187 read: IOPS=215, BW=863KiB/s (883kB/s)(8644KiB/10020msec) 00:19:08.187 slat (usec): min=3, max=9029, avg=26.09, stdev=252.83 00:19:08.187 clat (msec): min=19, max=146, avg=74.04, stdev=22.11 00:19:08.187 lat (msec): min=19, max=146, avg=74.06, stdev=22.11 00:19:08.187 clat percentiles (msec): 00:19:08.187 | 1.00th=[ 24], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 54], 00:19:08.187 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 78], 00:19:08.187 | 70.00th=[ 81], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 114], 00:19:08.187 | 99.00th=[ 124], 99.50th=[ 130], 99.90th=[ 146], 99.95th=[ 146], 00:19:08.187 | 99.99th=[ 146] 00:19:08.187 bw ( KiB/s): min= 656, max= 1136, per=4.15%, avg=860.25, stdev=120.40, samples=20 00:19:08.187 iops : min= 164, max= 284, avg=215.05, stdev=30.08, samples=20 00:19:08.187 lat (msec) : 20=0.65%, 50=14.95%, 100=68.86%, 250=15.55% 00:19:08.187 cpu : usr=42.57%, sys=2.31%, ctx=1445, majf=0, minf=9 00:19:08.187 IO depths : 1=0.1%, 2=1.1%, 4=4.2%, 8=79.2%, 16=15.5%, 32=0.0%, >=64=0.0% 00:19:08.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.187 complete : 0=0.0%, 4=88.2%, 8=10.9%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.187 issued rwts: total=2161,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:08.187 filename0: (groupid=0, jobs=1): err= 0: pid=82756: Fri Nov 15 12:54:16 2024 00:19:08.187 read: IOPS=219, BW=879KiB/s (901kB/s)(8828KiB/10038msec) 00:19:08.187 slat (usec): min=7, max=4025, avg=18.76, stdev=120.78 00:19:08.187 clat (msec): min=9, max=157, avg=72.65, stdev=23.97 00:19:08.187 lat (msec): min=9, max=157, avg=72.66, stdev=23.97 00:19:08.187 clat percentiles (msec): 00:19:08.187 | 1.00th=[ 15], 5.00th=[ 33], 10.00th=[ 47], 20.00th=[ 52], 00:19:08.188 | 30.00th=[ 62], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 77], 00:19:08.188 | 70.00th=[ 81], 80.00th=[ 93], 90.00th=[ 108], 95.00th=[ 115], 00:19:08.188 | 99.00th=[ 130], 99.50th=[ 131], 99.90th=[ 157], 99.95th=[ 157], 00:19:08.188 | 99.99th=[ 157] 00:19:08.188 bw ( KiB/s): min= 640, max= 1415, per=4.23%, avg=876.75, stdev=163.21, samples=20 00:19:08.188 iops : min= 160, max= 353, avg=219.15, stdev=40.67, samples=20 00:19:08.188 lat (msec) : 10=0.63%, 20=0.91%, 50=16.72%, 100=66.88%, 250=14.86% 00:19:08.188 cpu : usr=40.91%, sys=2.05%, ctx=1522, majf=0, minf=9 00:19:08.188 IO depths 
: 1=0.1%, 2=0.4%, 4=1.4%, 8=82.0%, 16=16.2%, 32=0.0%, >=64=0.0% 00:19:08.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.188 complete : 0=0.0%, 4=87.6%, 8=12.0%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.188 issued rwts: total=2207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:08.188 filename0: (groupid=0, jobs=1): err= 0: pid=82757: Fri Nov 15 12:54:16 2024 00:19:08.188 read: IOPS=216, BW=866KiB/s (887kB/s)(8688KiB/10035msec) 00:19:08.188 slat (usec): min=8, max=10020, avg=27.57, stdev=335.43 00:19:08.188 clat (msec): min=7, max=154, avg=73.78, stdev=23.39 00:19:08.188 lat (msec): min=7, max=154, avg=73.80, stdev=23.40 00:19:08.188 clat percentiles (msec): 00:19:08.188 | 1.00th=[ 13], 5.00th=[ 28], 10.00th=[ 48], 20.00th=[ 57], 00:19:08.188 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 75], 00:19:08.188 | 70.00th=[ 83], 80.00th=[ 92], 90.00th=[ 108], 95.00th=[ 116], 00:19:08.188 | 99.00th=[ 124], 99.50th=[ 131], 99.90th=[ 155], 99.95th=[ 155], 00:19:08.188 | 99.99th=[ 155] 00:19:08.188 bw ( KiB/s): min= 664, max= 1502, per=4.16%, avg=862.70, stdev=175.08, samples=20 00:19:08.188 iops : min= 166, max= 375, avg=215.65, stdev=43.67, samples=20 00:19:08.188 lat (msec) : 10=0.09%, 20=2.35%, 50=12.66%, 100=70.03%, 250=14.87% 00:19:08.188 cpu : usr=33.47%, sys=1.99%, ctx=965, majf=0, minf=9 00:19:08.188 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.6%, 16=16.8%, 32=0.0%, >=64=0.0% 00:19:08.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.188 complete : 0=0.0%, 4=87.7%, 8=12.2%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.188 issued rwts: total=2172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:08.188 filename1: (groupid=0, jobs=1): err= 0: pid=82758: Fri Nov 15 12:54:16 2024 00:19:08.188 read: IOPS=208, BW=833KiB/s (853kB/s)(8356KiB/10035msec) 00:19:08.188 slat (usec): min=6, max=8020, avg=23.53, stdev=262.75 00:19:08.188 clat (msec): min=9, max=168, avg=76.65, stdev=24.61 00:19:08.188 lat (msec): min=9, max=168, avg=76.67, stdev=24.61 00:19:08.188 clat percentiles (msec): 00:19:08.188 | 1.00th=[ 14], 5.00th=[ 29], 10.00th=[ 48], 20.00th=[ 61], 00:19:08.188 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 81], 00:19:08.188 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 111], 95.00th=[ 121], 00:19:08.188 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 153], 99.95th=[ 169], 00:19:08.188 | 99.99th=[ 169] 00:19:08.188 bw ( KiB/s): min= 656, max= 1399, per=4.01%, avg=831.15, stdev=157.88, samples=20 00:19:08.188 iops : min= 164, max= 349, avg=207.75, stdev=39.33, samples=20 00:19:08.188 lat (msec) : 10=0.67%, 20=1.53%, 50=11.25%, 100=69.46%, 250=17.09% 00:19:08.188 cpu : usr=36.48%, sys=2.14%, ctx=1119, majf=0, minf=9 00:19:08.188 IO depths : 1=0.1%, 2=1.6%, 4=6.6%, 8=76.1%, 16=15.7%, 32=0.0%, >=64=0.0% 00:19:08.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.188 complete : 0=0.0%, 4=89.3%, 8=9.3%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.188 issued rwts: total=2089,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:08.188 filename1: (groupid=0, jobs=1): err= 0: pid=82759: Fri Nov 15 12:54:16 2024 00:19:08.188 read: IOPS=223, BW=895KiB/s (917kB/s)(8956KiB/10005msec) 00:19:08.188 slat (usec): min=3, max=8028, avg=21.11, stdev=199.73 00:19:08.188 clat (msec): min=9, max=146, 
avg=71.41, stdev=21.95 00:19:08.188 lat (msec): min=9, max=146, avg=71.43, stdev=21.96 00:19:08.188 clat percentiles (msec): 00:19:08.188 | 1.00th=[ 22], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 52], 00:19:08.188 | 30.00th=[ 58], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 75], 00:19:08.188 | 70.00th=[ 80], 80.00th=[ 85], 90.00th=[ 107], 95.00th=[ 115], 00:19:08.188 | 99.00th=[ 122], 99.50th=[ 130], 99.90th=[ 146], 99.95th=[ 146], 00:19:08.188 | 99.99th=[ 146] 00:19:08.188 bw ( KiB/s): min= 768, max= 1072, per=4.31%, avg=892.32, stdev=86.40, samples=19 00:19:08.188 iops : min= 192, max= 268, avg=223.05, stdev=21.57, samples=19 00:19:08.188 lat (msec) : 10=0.22%, 20=0.54%, 50=17.82%, 100=68.69%, 250=12.73% 00:19:08.188 cpu : usr=39.21%, sys=2.13%, ctx=1265, majf=0, minf=9 00:19:08.188 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.3%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:08.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.188 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.188 issued rwts: total=2239,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:08.188 filename1: (groupid=0, jobs=1): err= 0: pid=82760: Fri Nov 15 12:54:16 2024 00:19:08.188 read: IOPS=193, BW=772KiB/s (791kB/s)(7724KiB/10004msec) 00:19:08.188 slat (usec): min=4, max=10109, avg=27.47, stdev=295.93 00:19:08.188 clat (msec): min=4, max=157, avg=82.68, stdev=24.78 00:19:08.188 lat (msec): min=4, max=157, avg=82.71, stdev=24.78 00:19:08.188 clat percentiles (msec): 00:19:08.188 | 1.00th=[ 13], 5.00th=[ 48], 10.00th=[ 54], 20.00th=[ 70], 00:19:08.188 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 78], 60.00th=[ 82], 00:19:08.188 | 70.00th=[ 92], 80.00th=[ 106], 90.00th=[ 117], 95.00th=[ 128], 00:19:08.188 | 99.00th=[ 148], 99.50th=[ 155], 99.90th=[ 157], 99.95th=[ 157], 00:19:08.188 | 99.99th=[ 157] 00:19:08.188 bw ( KiB/s): min= 512, max= 976, per=3.65%, avg=756.63, stdev=124.69, samples=19 00:19:08.188 iops : min= 128, max= 244, avg=189.16, stdev=31.17, samples=19 00:19:08.188 lat (msec) : 10=0.98%, 20=0.98%, 50=5.49%, 100=67.79%, 250=24.75% 00:19:08.188 cpu : usr=41.81%, sys=2.12%, ctx=1300, majf=0, minf=9 00:19:08.188 IO depths : 1=0.1%, 2=4.2%, 4=16.9%, 8=65.2%, 16=13.6%, 32=0.0%, >=64=0.0% 00:19:08.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.188 complete : 0=0.0%, 4=91.9%, 8=4.4%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.188 issued rwts: total=1931,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:08.188 filename1: (groupid=0, jobs=1): err= 0: pid=82761: Fri Nov 15 12:54:16 2024 00:19:08.188 read: IOPS=213, BW=852KiB/s (872kB/s)(8540KiB/10023msec) 00:19:08.188 slat (usec): min=3, max=8025, avg=29.09, stdev=306.26 00:19:08.188 clat (msec): min=23, max=161, avg=74.94, stdev=23.76 00:19:08.188 lat (msec): min=23, max=161, avg=74.97, stdev=23.76 00:19:08.188 clat percentiles (msec): 00:19:08.188 | 1.00th=[ 27], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 54], 00:19:08.188 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 75], 00:19:08.188 | 70.00th=[ 82], 80.00th=[ 90], 90.00th=[ 111], 95.00th=[ 120], 00:19:08.188 | 99.00th=[ 146], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 163], 00:19:08.188 | 99.99th=[ 163] 00:19:08.188 bw ( KiB/s): min= 736, max= 1015, per=4.10%, avg=849.15, stdev=89.15, samples=20 00:19:08.188 iops : min= 184, max= 253, avg=212.25, stdev=22.21, samples=20 00:19:08.188 lat (msec) : 
50=16.63%, 100=65.90%, 250=17.47% 00:19:08.188 cpu : usr=34.92%, sys=1.87%, ctx=1060, majf=0, minf=9 00:19:08.188 IO depths : 1=0.1%, 2=1.4%, 4=5.6%, 8=77.7%, 16=15.3%, 32=0.0%, >=64=0.0% 00:19:08.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.188 complete : 0=0.0%, 4=88.6%, 8=10.2%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.188 issued rwts: total=2135,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:08.188 filename1: (groupid=0, jobs=1): err= 0: pid=82762: Fri Nov 15 12:54:16 2024 00:19:08.188 read: IOPS=217, BW=871KiB/s (892kB/s)(8740KiB/10032msec) 00:19:08.188 slat (usec): min=7, max=8024, avg=26.06, stdev=296.59 00:19:08.188 clat (msec): min=19, max=146, avg=73.27, stdev=22.60 00:19:08.188 lat (msec): min=19, max=146, avg=73.30, stdev=22.60 00:19:08.188 clat percentiles (msec): 00:19:08.188 | 1.00th=[ 26], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 50], 00:19:08.188 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 74], 00:19:08.188 | 70.00th=[ 84], 80.00th=[ 94], 90.00th=[ 108], 95.00th=[ 117], 00:19:08.188 | 99.00th=[ 131], 99.50th=[ 132], 99.90th=[ 146], 99.95th=[ 146], 00:19:08.188 | 99.99th=[ 146] 00:19:08.188 bw ( KiB/s): min= 712, max= 1165, per=4.20%, avg=869.45, stdev=109.73, samples=20 00:19:08.188 iops : min= 178, max= 291, avg=217.35, stdev=27.40, samples=20 00:19:08.188 lat (msec) : 20=0.09%, 50=20.64%, 100=65.68%, 250=13.59% 00:19:08.188 cpu : usr=31.54%, sys=1.81%, ctx=886, majf=0, minf=9 00:19:08.188 IO depths : 1=0.1%, 2=0.3%, 4=0.9%, 8=82.6%, 16=16.2%, 32=0.0%, >=64=0.0% 00:19:08.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.188 complete : 0=0.0%, 4=87.4%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.188 issued rwts: total=2185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:08.188 filename1: (groupid=0, jobs=1): err= 0: pid=82763: Fri Nov 15 12:54:16 2024 00:19:08.188 read: IOPS=216, BW=866KiB/s (886kB/s)(8700KiB/10051msec) 00:19:08.188 slat (usec): min=7, max=10689, avg=29.55, stdev=333.58 00:19:08.188 clat (usec): min=1606, max=150897, avg=73737.64, stdev=27257.87 00:19:08.188 lat (usec): min=1616, max=150911, avg=73767.19, stdev=27260.17 00:19:08.188 clat percentiles (usec): 00:19:08.188 | 1.00th=[ 1778], 5.00th=[ 11207], 10.00th=[ 44827], 20.00th=[ 55313], 00:19:08.188 | 30.00th=[ 66323], 40.00th=[ 71828], 50.00th=[ 73925], 60.00th=[ 79168], 00:19:08.188 | 70.00th=[ 82314], 80.00th=[ 96994], 90.00th=[108528], 95.00th=[115868], 00:19:08.188 | 99.00th=[139461], 99.50th=[143655], 99.90th=[149947], 99.95th=[149947], 00:19:08.188 | 99.99th=[149947] 00:19:08.188 bw ( KiB/s): min= 552, max= 2032, per=4.17%, avg=863.60, stdev=298.75, samples=20 00:19:08.188 iops : min= 138, max= 508, avg=215.90, stdev=74.69, samples=20 00:19:08.188 lat (msec) : 2=2.34%, 4=0.60%, 10=0.83%, 20=2.11%, 50=10.48% 00:19:08.188 lat (msec) : 100=66.76%, 250=16.87% 00:19:08.188 cpu : usr=45.07%, sys=2.29%, ctx=1255, majf=0, minf=9 00:19:08.188 IO depths : 1=0.2%, 2=1.6%, 4=5.8%, 8=76.7%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:08.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.188 complete : 0=0.0%, 4=89.2%, 8=9.6%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.188 issued rwts: total=2175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:08.188 filename1: (groupid=0, jobs=1): err= 
0: pid=82764: Fri Nov 15 12:54:16 2024 00:19:08.188 read: IOPS=213, BW=853KiB/s (874kB/s)(8564KiB/10035msec) 00:19:08.188 slat (usec): min=6, max=8022, avg=17.83, stdev=173.14 00:19:08.188 clat (msec): min=9, max=168, avg=74.81, stdev=24.08 00:19:08.188 lat (msec): min=9, max=168, avg=74.82, stdev=24.08 00:19:08.188 clat percentiles (msec): 00:19:08.188 | 1.00th=[ 17], 5.00th=[ 32], 10.00th=[ 48], 20.00th=[ 54], 00:19:08.188 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 79], 00:19:08.188 | 70.00th=[ 82], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 117], 00:19:08.188 | 99.00th=[ 132], 99.50th=[ 133], 99.90th=[ 153], 99.95th=[ 169], 00:19:08.188 | 99.99th=[ 169] 00:19:08.188 bw ( KiB/s): min= 656, max= 1272, per=4.11%, avg=851.60, stdev=141.10, samples=20 00:19:08.188 iops : min= 164, max= 318, avg=212.90, stdev=35.28, samples=20 00:19:08.188 lat (msec) : 10=0.65%, 20=1.59%, 50=12.66%, 100=67.49%, 250=17.61% 00:19:08.188 cpu : usr=42.86%, sys=2.34%, ctx=1346, majf=0, minf=9 00:19:08.189 IO depths : 1=0.1%, 2=2.0%, 4=7.8%, 8=75.3%, 16=14.9%, 32=0.0%, >=64=0.0% 00:19:08.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.189 complete : 0=0.0%, 4=89.1%, 8=9.2%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.189 issued rwts: total=2141,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:08.189 filename1: (groupid=0, jobs=1): err= 0: pid=82765: Fri Nov 15 12:54:16 2024 00:19:08.189 read: IOPS=219, BW=879KiB/s (900kB/s)(8788KiB/10002msec) 00:19:08.189 slat (usec): min=3, max=4022, avg=16.85, stdev=85.63 00:19:08.189 clat (usec): min=1221, max=143044, avg=72753.21, stdev=23861.94 00:19:08.189 lat (usec): min=1228, max=143052, avg=72770.06, stdev=23862.17 00:19:08.189 clat percentiles (msec): 00:19:08.189 | 1.00th=[ 3], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 53], 00:19:08.189 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 77], 00:19:08.189 | 70.00th=[ 81], 80.00th=[ 90], 90.00th=[ 108], 95.00th=[ 114], 00:19:08.189 | 99.00th=[ 125], 99.50th=[ 128], 99.90th=[ 144], 99.95th=[ 144], 00:19:08.189 | 99.99th=[ 144] 00:19:08.189 bw ( KiB/s): min= 656, max= 1024, per=4.12%, avg=853.47, stdev=89.76, samples=19 00:19:08.189 iops : min= 164, max= 256, avg=213.37, stdev=22.44, samples=19 00:19:08.189 lat (msec) : 2=0.59%, 4=0.91%, 10=0.68%, 20=1.00%, 50=12.70% 00:19:08.189 lat (msec) : 100=68.87%, 250=15.25% 00:19:08.189 cpu : usr=42.57%, sys=2.19%, ctx=1539, majf=0, minf=9 00:19:08.189 IO depths : 1=0.1%, 2=1.7%, 4=6.9%, 8=76.5%, 16=14.8%, 32=0.0%, >=64=0.0% 00:19:08.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.189 complete : 0=0.0%, 4=88.7%, 8=9.8%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.189 issued rwts: total=2197,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:08.189 filename2: (groupid=0, jobs=1): err= 0: pid=82766: Fri Nov 15 12:54:16 2024 00:19:08.189 read: IOPS=227, BW=910KiB/s (932kB/s)(9100KiB/10002msec) 00:19:08.189 slat (nsec): min=4557, max=37958, avg=14696.54, stdev=4534.30 00:19:08.189 clat (msec): min=2, max=143, avg=70.28, stdev=23.46 00:19:08.189 lat (msec): min=2, max=143, avg=70.29, stdev=23.46 00:19:08.189 clat percentiles (msec): 00:19:08.189 | 1.00th=[ 5], 5.00th=[ 37], 10.00th=[ 48], 20.00th=[ 48], 00:19:08.189 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 72], 00:19:08.189 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 108], 95.00th=[ 118], 00:19:08.189 | 
99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 144], 99.95th=[ 144], 00:19:08.189 | 99.99th=[ 144] 00:19:08.189 bw ( KiB/s): min= 712, max= 1048, per=4.31%, avg=893.47, stdev=93.68, samples=19 00:19:08.189 iops : min= 178, max= 262, avg=223.37, stdev=23.42, samples=19 00:19:08.189 lat (msec) : 4=0.66%, 10=0.88%, 20=0.97%, 50=22.90%, 100=62.07% 00:19:08.189 lat (msec) : 250=12.53% 00:19:08.189 cpu : usr=31.60%, sys=1.74%, ctx=886, majf=0, minf=9 00:19:08.189 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.6%, 16=15.7%, 32=0.0%, >=64=0.0% 00:19:08.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.189 complete : 0=0.0%, 4=86.8%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.189 issued rwts: total=2275,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:08.189 filename2: (groupid=0, jobs=1): err= 0: pid=82767: Fri Nov 15 12:54:16 2024 00:19:08.189 read: IOPS=215, BW=861KiB/s (881kB/s)(8636KiB/10034msec) 00:19:08.189 slat (usec): min=3, max=8024, avg=24.49, stdev=298.44 00:19:08.189 clat (msec): min=8, max=155, avg=74.21, stdev=23.98 00:19:08.189 lat (msec): min=8, max=155, avg=74.23, stdev=23.98 00:19:08.189 clat percentiles (msec): 00:19:08.189 | 1.00th=[ 13], 5.00th=[ 34], 10.00th=[ 48], 20.00th=[ 54], 00:19:08.189 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 77], 00:19:08.189 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 120], 00:19:08.189 | 99.00th=[ 128], 99.50th=[ 136], 99.90th=[ 148], 99.95th=[ 148], 00:19:08.189 | 99.99th=[ 157] 00:19:08.189 bw ( KiB/s): min= 632, max= 1423, per=4.14%, avg=857.95, stdev=165.79, samples=20 00:19:08.189 iops : min= 158, max= 355, avg=214.45, stdev=41.31, samples=20 00:19:08.189 lat (msec) : 10=0.74%, 20=1.57%, 50=14.82%, 100=66.51%, 250=16.35% 00:19:08.189 cpu : usr=35.36%, sys=2.03%, ctx=984, majf=0, minf=9 00:19:08.189 IO depths : 1=0.1%, 2=0.2%, 4=1.1%, 8=82.0%, 16=16.7%, 32=0.0%, >=64=0.0% 00:19:08.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.189 complete : 0=0.0%, 4=87.9%, 8=11.8%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.189 issued rwts: total=2159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:08.189 filename2: (groupid=0, jobs=1): err= 0: pid=82768: Fri Nov 15 12:54:16 2024 00:19:08.189 read: IOPS=212, BW=851KiB/s (871kB/s)(8532KiB/10025msec) 00:19:08.189 slat (usec): min=5, max=12029, avg=32.92, stdev=387.63 00:19:08.189 clat (msec): min=18, max=155, avg=74.99, stdev=21.69 00:19:08.189 lat (msec): min=18, max=155, avg=75.02, stdev=21.69 00:19:08.189 clat percentiles (msec): 00:19:08.189 | 1.00th=[ 28], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 56], 00:19:08.189 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 78], 00:19:08.189 | 70.00th=[ 83], 80.00th=[ 94], 90.00th=[ 109], 95.00th=[ 115], 00:19:08.189 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 157], 99.95th=[ 157], 00:19:08.189 | 99.99th=[ 157] 00:19:08.189 bw ( KiB/s): min= 688, max= 1015, per=4.10%, avg=848.95, stdev=89.44, samples=20 00:19:08.189 iops : min= 172, max= 253, avg=212.20, stdev=22.29, samples=20 00:19:08.189 lat (msec) : 20=0.66%, 50=14.35%, 100=69.67%, 250=15.33% 00:19:08.189 cpu : usr=36.90%, sys=2.05%, ctx=1151, majf=0, minf=9 00:19:08.189 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.0%, 16=16.4%, 32=0.0%, >=64=0.0% 00:19:08.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.189 complete : 0=0.0%, 4=87.7%, 8=12.0%, 
16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.189 issued rwts: total=2133,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:08.189 filename2: (groupid=0, jobs=1): err= 0: pid=82769: Fri Nov 15 12:54:16 2024 00:19:08.189 read: IOPS=224, BW=899KiB/s (920kB/s)(8996KiB/10009msec) 00:19:08.189 slat (usec): min=4, max=4035, avg=18.30, stdev=91.78 00:19:08.189 clat (msec): min=9, max=147, avg=71.11, stdev=22.20 00:19:08.189 lat (msec): min=9, max=147, avg=71.13, stdev=22.21 00:19:08.189 clat percentiles (msec): 00:19:08.189 | 1.00th=[ 25], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 51], 00:19:08.189 | 30.00th=[ 57], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 75], 00:19:08.189 | 70.00th=[ 80], 80.00th=[ 85], 90.00th=[ 106], 95.00th=[ 114], 00:19:08.189 | 99.00th=[ 122], 99.50th=[ 127], 99.90th=[ 148], 99.95th=[ 148], 00:19:08.189 | 99.99th=[ 148] 00:19:08.189 bw ( KiB/s): min= 720, max= 1043, per=4.32%, avg=895.32, stdev=87.63, samples=19 00:19:08.189 iops : min= 180, max= 260, avg=223.79, stdev=21.84, samples=19 00:19:08.189 lat (msec) : 10=0.18%, 20=0.80%, 50=18.54%, 100=67.59%, 250=12.89% 00:19:08.189 cpu : usr=41.16%, sys=2.34%, ctx=1210, majf=0, minf=9 00:19:08.189 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.3%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:08.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.189 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.189 issued rwts: total=2249,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:08.189 filename2: (groupid=0, jobs=1): err= 0: pid=82770: Fri Nov 15 12:54:16 2024 00:19:08.189 read: IOPS=212, BW=849KiB/s (869kB/s)(8504KiB/10019msec) 00:19:08.189 slat (usec): min=4, max=8026, avg=18.73, stdev=173.82 00:19:08.189 clat (msec): min=24, max=148, avg=75.27, stdev=21.65 00:19:08.189 lat (msec): min=24, max=148, avg=75.29, stdev=21.65 00:19:08.189 clat percentiles (msec): 00:19:08.189 | 1.00th=[ 29], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 58], 00:19:08.189 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 77], 00:19:08.189 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 114], 00:19:08.189 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 148], 99.95th=[ 148], 00:19:08.189 | 99.99th=[ 148] 00:19:08.189 bw ( KiB/s): min= 640, max= 1024, per=4.07%, avg=843.60, stdev=101.41, samples=20 00:19:08.189 iops : min= 160, max= 256, avg=210.90, stdev=25.35, samples=20 00:19:08.189 lat (msec) : 50=17.45%, 100=67.78%, 250=14.77% 00:19:08.189 cpu : usr=34.59%, sys=1.91%, ctx=955, majf=0, minf=9 00:19:08.189 IO depths : 1=0.1%, 2=1.6%, 4=6.3%, 8=76.8%, 16=15.2%, 32=0.0%, >=64=0.0% 00:19:08.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.189 complete : 0=0.0%, 4=88.8%, 8=9.8%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.189 issued rwts: total=2126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:08.189 filename2: (groupid=0, jobs=1): err= 0: pid=82771: Fri Nov 15 12:54:16 2024 00:19:08.189 read: IOPS=220, BW=882KiB/s (904kB/s)(8844KiB/10022msec) 00:19:08.189 slat (usec): min=3, max=4022, avg=16.91, stdev=85.36 00:19:08.189 clat (msec): min=24, max=152, avg=72.42, stdev=21.39 00:19:08.189 lat (msec): min=24, max=152, avg=72.44, stdev=21.38 00:19:08.189 clat percentiles (msec): 00:19:08.189 | 1.00th=[ 32], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 50], 00:19:08.189 | 30.00th=[ 61], 
40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 74], 00:19:08.189 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 108], 95.00th=[ 113], 00:19:08.189 | 99.00th=[ 121], 99.50th=[ 127], 99.90th=[ 153], 99.95th=[ 153], 00:19:08.189 | 99.99th=[ 153] 00:19:08.189 bw ( KiB/s): min= 768, max= 1096, per=4.25%, avg=879.70, stdev=93.27, samples=20 00:19:08.189 iops : min= 192, max= 274, avg=219.90, stdev=23.31, samples=20 00:19:08.189 lat (msec) : 50=21.03%, 100=66.62%, 250=12.35% 00:19:08.189 cpu : usr=35.15%, sys=1.81%, ctx=1019, majf=0, minf=10 00:19:08.189 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.2%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:08.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.189 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.189 issued rwts: total=2211,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:08.189 filename2: (groupid=0, jobs=1): err= 0: pid=82772: Fri Nov 15 12:54:16 2024 00:19:08.189 read: IOPS=215, BW=860KiB/s (881kB/s)(8628KiB/10032msec) 00:19:08.189 slat (usec): min=5, max=8029, avg=27.67, stdev=310.87 00:19:08.189 clat (msec): min=20, max=151, avg=74.20, stdev=22.03 00:19:08.189 lat (msec): min=20, max=151, avg=74.22, stdev=22.03 00:19:08.189 clat percentiles (msec): 00:19:08.189 | 1.00th=[ 31], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 54], 00:19:08.189 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 75], 00:19:08.189 | 70.00th=[ 84], 80.00th=[ 94], 90.00th=[ 108], 95.00th=[ 116], 00:19:08.189 | 99.00th=[ 130], 99.50th=[ 136], 99.90th=[ 144], 99.95th=[ 144], 00:19:08.189 | 99.99th=[ 153] 00:19:08.189 bw ( KiB/s): min= 664, max= 992, per=4.15%, avg=858.10, stdev=93.39, samples=20 00:19:08.189 iops : min= 166, max= 248, avg=214.50, stdev=23.31, samples=20 00:19:08.189 lat (msec) : 50=17.52%, 100=68.10%, 250=14.37% 00:19:08.189 cpu : usr=33.30%, sys=2.03%, ctx=996, majf=0, minf=9 00:19:08.189 IO depths : 1=0.1%, 2=0.4%, 4=1.3%, 8=82.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:19:08.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.189 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.189 issued rwts: total=2157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:08.189 filename2: (groupid=0, jobs=1): err= 0: pid=82773: Fri Nov 15 12:54:16 2024 00:19:08.189 read: IOPS=224, BW=899KiB/s (920kB/s)(8992KiB/10005msec) 00:19:08.189 slat (usec): min=3, max=8024, avg=18.85, stdev=169.00 00:19:08.189 clat (msec): min=7, max=151, avg=71.12, stdev=22.90 00:19:08.189 lat (msec): min=7, max=151, avg=71.14, stdev=22.90 00:19:08.189 clat percentiles (msec): 00:19:08.189 | 1.00th=[ 16], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 50], 00:19:08.189 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 73], 00:19:08.189 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 108], 95.00th=[ 116], 00:19:08.189 | 99.00th=[ 127], 99.50th=[ 130], 99.90th=[ 153], 99.95th=[ 153], 00:19:08.189 | 99.99th=[ 153] 00:19:08.189 bw ( KiB/s): min= 712, max= 1048, per=4.32%, avg=894.32, stdev=91.43, samples=19 00:19:08.189 iops : min= 178, max= 262, avg=223.58, stdev=22.86, samples=19 00:19:08.190 lat (msec) : 10=0.27%, 20=0.85%, 50=21.17%, 100=64.68%, 250=13.03% 00:19:08.190 cpu : usr=37.28%, sys=1.81%, ctx=980, majf=0, minf=9 00:19:08.190 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:08.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.190 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.190 issued rwts: total=2248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:08.190 00:19:08.190 Run status group 0 (all jobs): 00:19:08.190 READ: bw=20.2MiB/s (21.2MB/s), 772KiB/s-910KiB/s (791kB/s-932kB/s), io=203MiB (213MB), run=10002-10051msec 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:08.451 bdev_null0 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.451 12:54:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:08.451 [2024-11-15 12:54:16.998887] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:08.451 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.451 12:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:08.451 
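The trace above tears down the previous round and then rebuilds subsystem 0 with a DIF-enabled null bdev (--dif-type 1) exported over NVMe/TCP. For reference, a minimal standalone sketch of that same setup using SPDK's scripts/rpc.py, with the bdev name, NQN, serial number, address and port taken verbatim from the trace; it assumes an nvmf target is already running with its RPC socket reachable and the tcp transport already created, as the test framework does earlier:

    # Sketch only: reproduce the per-subsystem setup seen in the trace.
    # Assumes a running SPDK nvmf target and an existing tcp transport
    # (e.g. scripts/rpc.py nvmf_create_transport -t tcp).
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.3 -s 4420

The same four calls are repeated per subsystem (cnode1, cnode2, ...) with the subsystem index substituted, which is exactly what the create_subsystem loop in target/dif.sh does through rpc_cmd.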
12:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:19:08.451 12:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:19:08.451 12:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:08.451 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.451 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:08.451 bdev_null1 00:19:08.451 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.451 12:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:08.451 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.451 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:08.451 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.451 12:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:08.451 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.451 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:08.451 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.451 12:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:08.451 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.451 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:08.451 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.451 12:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:19:08.451 12:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:19:08.451 12:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:08.451 12:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:19:08.451 12:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:19:08.451 12:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:08.451 12:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:08.451 { 00:19:08.451 "params": { 00:19:08.451 "name": "Nvme$subsystem", 00:19:08.451 "trtype": "$TEST_TRANSPORT", 00:19:08.451 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:08.451 "adrfam": "ipv4", 00:19:08.451 "trsvcid": "$NVMF_PORT", 00:19:08.451 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:08.451 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:08.451 "hdgst": ${hdgst:-false}, 00:19:08.451 "ddgst": ${ddgst:-false} 00:19:08.451 }, 00:19:08.451 "method": "bdev_nvme_attach_controller" 00:19:08.451 } 00:19:08.451 EOF 00:19:08.451 )") 00:19:08.451 12:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:08.451 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:08.452 { 00:19:08.452 "params": { 00:19:08.452 "name": "Nvme$subsystem", 00:19:08.452 "trtype": "$TEST_TRANSPORT", 00:19:08.452 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:08.452 "adrfam": "ipv4", 00:19:08.452 "trsvcid": "$NVMF_PORT", 00:19:08.452 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:08.452 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:08.452 "hdgst": ${hdgst:-false}, 00:19:08.452 "ddgst": ${ddgst:-false} 00:19:08.452 }, 00:19:08.452 "method": "bdev_nvme_attach_controller" 00:19:08.452 } 00:19:08.452 EOF 00:19:08.452 )") 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
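gen_nvmf_target_json, whose xtrace appears above, accumulates one bdev_nvme_attach_controller JSON fragment per subsystem in a shell array, joins the fragments with commas, and pretty-prints the result with jq before handing it to fio as --spdk_json_conf via /dev/fd/62. A rough standalone sketch of that accumulate-and-join pattern follows; values are taken from the trace, but the fragments are wrapped in a plain JSON array here purely so jq accepts them, whereas the real helper embeds them in a fuller SPDK bdev configuration:

    # Sketch of the fragment-per-subsystem pattern visible in the trace.
    config=()
    for sub in 0 1; do
      config+=("$(cat <<EOF
    {
      "params": {
        "name": "Nvme$sub",
        "trtype": "tcp",
        "traddr": "10.0.0.3",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
        "hostnqn": "nqn.2016-06.io.spdk:host$sub",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF
    )")
    done
    IFS=,
    printf '[%s]\n' "${config[*]}" | jq .   # joined fragments, pretty-printed

The printf output in the trace below is exactly this join with the $TEST_TRANSPORT / $NVMF_FIRST_TARGET_IP / $NVMF_PORT placeholders already expanded to tcp, 10.0.0.3 and 4420.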
00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:08.452 "params": { 00:19:08.452 "name": "Nvme0", 00:19:08.452 "trtype": "tcp", 00:19:08.452 "traddr": "10.0.0.3", 00:19:08.452 "adrfam": "ipv4", 00:19:08.452 "trsvcid": "4420", 00:19:08.452 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:08.452 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:08.452 "hdgst": false, 00:19:08.452 "ddgst": false 00:19:08.452 }, 00:19:08.452 "method": "bdev_nvme_attach_controller" 00:19:08.452 },{ 00:19:08.452 "params": { 00:19:08.452 "name": "Nvme1", 00:19:08.452 "trtype": "tcp", 00:19:08.452 "traddr": "10.0.0.3", 00:19:08.452 "adrfam": "ipv4", 00:19:08.452 "trsvcid": "4420", 00:19:08.452 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.452 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:08.452 "hdgst": false, 00:19:08.452 "ddgst": false 00:19:08.452 }, 00:19:08.452 "method": "bdev_nvme_attach_controller" 00:19:08.452 }' 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:08.452 12:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:08.711 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:08.711 ... 00:19:08.711 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:08.711 ... 
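The job descriptions printed next (filename0/filename1, rw=randread, bs 8k/16k/128k, iodepth=8) come from the job file that gen_fio_conf feeds fio over /dev/fd/61. The exact generated contents are not echoed in the log, so the following is only a plausible reconstruction from the parameters set in this round (NULL_DIF=1, bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1); the Nvme0n1/Nvme1n1 bdev names are assumed, being the usual namespace names produced by bdev_nvme_attach_controller for controllers Nvme0 and Nvme1:

    # Sketch only: approximate equivalent of the generated fio job file.
    cat > dif_rand_params.fio <<'EOF'
    [global]
    thread=1
    ioengine=spdk_bdev
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    numjobs=2
    runtime=5

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1
    EOF
    # The plugin is preloaded and pointed at the JSON generated above,
    # mirroring the LD_PRELOAD + --spdk_json_conf invocation in the trace.
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio dif_rand_params.fio --spdk_json_conf=./nvmf_target.json

With two job sections and numjobs=2 this yields the "Starting 4 threads" line that follows.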
00:19:08.711 fio-3.35 00:19:08.711 Starting 4 threads 00:19:15.281 00:19:15.281 filename0: (groupid=0, jobs=1): err= 0: pid=82906: Fri Nov 15 12:54:22 2024 00:19:15.281 read: IOPS=2221, BW=17.4MiB/s (18.2MB/s)(86.8MiB/5001msec) 00:19:15.281 slat (nsec): min=6778, max=78427, avg=12909.80, stdev=4333.81 00:19:15.281 clat (usec): min=896, max=5246, avg=3555.18, stdev=455.15 00:19:15.281 lat (usec): min=905, max=5257, avg=3568.09, stdev=455.36 00:19:15.281 clat percentiles (usec): 00:19:15.281 | 1.00th=[ 1975], 5.00th=[ 2311], 10.00th=[ 3425], 20.00th=[ 3523], 00:19:15.281 | 30.00th=[ 3556], 40.00th=[ 3589], 50.00th=[ 3621], 60.00th=[ 3621], 00:19:15.281 | 70.00th=[ 3687], 80.00th=[ 3752], 90.00th=[ 3949], 95.00th=[ 4080], 00:19:15.281 | 99.00th=[ 4228], 99.50th=[ 4293], 99.90th=[ 4555], 99.95th=[ 4752], 00:19:15.281 | 99.99th=[ 4948] 00:19:15.281 bw ( KiB/s): min=17024, max=20640, per=25.69%, avg=17856.00, stdev=1139.38, samples=9 00:19:15.281 iops : min= 2128, max= 2580, avg=2232.00, stdev=142.42, samples=9 00:19:15.281 lat (usec) : 1000=0.09% 00:19:15.281 lat (msec) : 2=1.12%, 4=90.68%, 10=8.11% 00:19:15.281 cpu : usr=91.70%, sys=7.36%, ctx=17, majf=0, minf=9 00:19:15.281 IO depths : 1=0.1%, 2=21.1%, 4=52.2%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.281 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.281 issued rwts: total=11108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.281 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:15.281 filename0: (groupid=0, jobs=1): err= 0: pid=82907: Fri Nov 15 12:54:22 2024 00:19:15.281 read: IOPS=2122, BW=16.6MiB/s (17.4MB/s)(82.9MiB/5002msec) 00:19:15.281 slat (nsec): min=3212, max=68782, avg=13758.08, stdev=4103.49 00:19:15.281 clat (usec): min=2113, max=5221, avg=3715.52, stdev=263.00 00:19:15.281 lat (usec): min=2127, max=5232, avg=3729.28, stdev=263.05 00:19:15.281 clat percentiles (usec): 00:19:15.281 | 1.00th=[ 3425], 5.00th=[ 3490], 10.00th=[ 3523], 20.00th=[ 3556], 00:19:15.281 | 30.00th=[ 3556], 40.00th=[ 3589], 50.00th=[ 3621], 60.00th=[ 3654], 00:19:15.281 | 70.00th=[ 3720], 80.00th=[ 3851], 90.00th=[ 4146], 95.00th=[ 4228], 00:19:15.281 | 99.00th=[ 4752], 99.50th=[ 4817], 99.90th=[ 5080], 99.95th=[ 5145], 00:19:15.281 | 99.99th=[ 5211] 00:19:15.281 bw ( KiB/s): min=14749, max=17536, per=24.42%, avg=16970.33, stdev=900.92, samples=9 00:19:15.281 iops : min= 1843, max= 2192, avg=2121.22, stdev=112.81, samples=9 00:19:15.281 lat (msec) : 4=85.63%, 10=14.37% 00:19:15.281 cpu : usr=92.36%, sys=6.66%, ctx=5, majf=0, minf=0 00:19:15.281 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.281 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.281 issued rwts: total=10616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.281 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:15.281 filename1: (groupid=0, jobs=1): err= 0: pid=82908: Fri Nov 15 12:54:22 2024 00:19:15.281 read: IOPS=2122, BW=16.6MiB/s (17.4MB/s)(82.9MiB/5002msec) 00:19:15.281 slat (nsec): min=3186, max=58091, avg=14244.51, stdev=4163.33 00:19:15.281 clat (usec): min=2113, max=5486, avg=3713.04, stdev=263.90 00:19:15.281 lat (usec): min=2124, max=5498, avg=3727.29, stdev=264.06 00:19:15.281 clat percentiles (usec): 00:19:15.281 | 1.00th=[ 3425], 5.00th=[ 3490], 10.00th=[ 3490], 20.00th=[ 3556], 
00:19:15.281 | 30.00th=[ 3556], 40.00th=[ 3589], 50.00th=[ 3621], 60.00th=[ 3654], 00:19:15.281 | 70.00th=[ 3720], 80.00th=[ 3851], 90.00th=[ 4146], 95.00th=[ 4228], 00:19:15.281 | 99.00th=[ 4752], 99.50th=[ 4817], 99.90th=[ 5145], 99.95th=[ 5342], 00:19:15.281 | 99.99th=[ 5342] 00:19:15.281 bw ( KiB/s): min=14720, max=17536, per=24.42%, avg=16967.11, stdev=909.86, samples=9 00:19:15.281 iops : min= 1840, max= 2192, avg=2120.89, stdev=113.73, samples=9 00:19:15.281 lat (msec) : 4=85.74%, 10=14.26% 00:19:15.281 cpu : usr=92.04%, sys=6.98%, ctx=108, majf=0, minf=0 00:19:15.281 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.281 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.282 issued rwts: total=10616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.282 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:15.282 filename1: (groupid=0, jobs=1): err= 0: pid=82909: Fri Nov 15 12:54:22 2024 00:19:15.282 read: IOPS=2221, BW=17.4MiB/s (18.2MB/s)(86.8MiB/5002msec) 00:19:15.282 slat (nsec): min=3231, max=58310, avg=13402.28, stdev=4650.12 00:19:15.282 clat (usec): min=691, max=5303, avg=3550.02, stdev=453.43 00:19:15.282 lat (usec): min=699, max=5317, avg=3563.42, stdev=453.46 00:19:15.282 clat percentiles (usec): 00:19:15.282 | 1.00th=[ 1975], 5.00th=[ 2311], 10.00th=[ 3425], 20.00th=[ 3523], 00:19:15.282 | 30.00th=[ 3556], 40.00th=[ 3589], 50.00th=[ 3589], 60.00th=[ 3621], 00:19:15.282 | 70.00th=[ 3687], 80.00th=[ 3752], 90.00th=[ 3949], 95.00th=[ 4080], 00:19:15.282 | 99.00th=[ 4228], 99.50th=[ 4293], 99.90th=[ 4555], 99.95th=[ 4817], 00:19:15.282 | 99.99th=[ 4948] 00:19:15.282 bw ( KiB/s): min=17024, max=20480, per=25.67%, avg=17841.78, stdev=1077.49, samples=9 00:19:15.282 iops : min= 2128, max= 2560, avg=2230.22, stdev=134.69, samples=9 00:19:15.282 lat (usec) : 750=0.02%, 1000=0.13% 00:19:15.282 lat (msec) : 2=1.16%, 4=90.86%, 10=7.83% 00:19:15.282 cpu : usr=92.62%, sys=6.44%, ctx=77, majf=0, minf=9 00:19:15.282 IO depths : 1=0.1%, 2=21.1%, 4=52.2%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.282 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.282 issued rwts: total=11111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.282 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:15.282 00:19:15.282 Run status group 0 (all jobs): 00:19:15.282 READ: bw=67.9MiB/s (71.2MB/s), 16.6MiB/s-17.4MiB/s (17.4MB/s-18.2MB/s), io=339MiB (356MB), run=5001-5002msec 00:19:15.282 12:54:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:19:15.282 12:54:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:19:15.282 12:54:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:15.282 12:54:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:15.282 12:54:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:19:15.282 12:54:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:15.282 12:54:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.282 12:54:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:15.282 12:54:22 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.282 12:54:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:15.282 12:54:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.282 12:54:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:15.282 12:54:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.282 12:54:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:15.282 12:54:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:15.282 12:54:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:19:15.282 12:54:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:15.282 12:54:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.282 12:54:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:15.282 12:54:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.282 12:54:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:15.282 12:54:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.282 12:54:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:15.282 12:54:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.282 00:19:15.282 real 0m23.220s 00:19:15.282 user 2m3.467s 00:19:15.282 sys 0m8.208s 00:19:15.282 12:54:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:15.282 12:54:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:15.282 ************************************ 00:19:15.282 END TEST fio_dif_rand_params 00:19:15.282 ************************************ 00:19:15.282 12:54:23 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:19:15.282 12:54:23 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:15.282 12:54:23 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:15.282 12:54:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:15.282 ************************************ 00:19:15.282 START TEST fio_dif_digest 00:19:15.282 ************************************ 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:15.282 bdev_null0 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:15.282 [2024-11-15 12:54:23.079362] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:15.282 12:54:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:15.282 { 00:19:15.282 "params": { 00:19:15.282 "name": "Nvme$subsystem", 00:19:15.282 "trtype": "$TEST_TRANSPORT", 00:19:15.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:15.282 "adrfam": "ipv4", 00:19:15.282 "trsvcid": "$NVMF_PORT", 00:19:15.283 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:15.283 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:15.283 "hdgst": ${hdgst:-false}, 00:19:15.283 "ddgst": ${ddgst:-false} 00:19:15.283 }, 00:19:15.283 "method": 
"bdev_nvme_attach_controller" 00:19:15.283 } 00:19:15.283 EOF 00:19:15.283 )") 00:19:15.283 12:54:23 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:19:15.283 12:54:23 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:19:15.283 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:15.283 12:54:23 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:19:15.283 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:15.283 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:15.283 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:15.283 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:15.283 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:19:15.283 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:15.283 12:54:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:19:15.283 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:15.283 12:54:23 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:19:15.283 12:54:23 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:19:15.283 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:15.283 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:19:15.283 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:15.283 12:54:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:19:15.283 12:54:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:19:15.283 12:54:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:15.283 "params": { 00:19:15.283 "name": "Nvme0", 00:19:15.283 "trtype": "tcp", 00:19:15.283 "traddr": "10.0.0.3", 00:19:15.283 "adrfam": "ipv4", 00:19:15.283 "trsvcid": "4420", 00:19:15.283 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:15.283 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:15.283 "hdgst": true, 00:19:15.283 "ddgst": true 00:19:15.283 }, 00:19:15.283 "method": "bdev_nvme_attach_controller" 00:19:15.283 }' 00:19:15.283 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:15.283 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:15.283 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:15.283 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:15.283 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:15.283 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:15.283 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:15.283 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:15.283 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:15.283 12:54:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:15.283 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:15.283 ... 
00:19:15.283 fio-3.35 00:19:15.283 Starting 3 threads 00:19:25.277 00:19:25.277 filename0: (groupid=0, jobs=1): err= 0: pid=83012: Fri Nov 15 12:54:33 2024 00:19:25.277 read: IOPS=252, BW=31.6MiB/s (33.1MB/s)(316MiB/10006msec) 00:19:25.277 slat (nsec): min=7166, max=66961, avg=13960.19, stdev=4201.88 00:19:25.277 clat (usec): min=8105, max=13916, avg=11851.94, stdev=440.89 00:19:25.277 lat (usec): min=8118, max=13928, avg=11865.90, stdev=441.27 00:19:25.277 clat percentiles (usec): 00:19:25.277 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11600], 00:19:25.277 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11731], 60.00th=[11731], 00:19:25.277 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12387], 95.00th=[12780], 00:19:25.277 | 99.00th=[13435], 99.50th=[13566], 99.90th=[13829], 99.95th=[13960], 00:19:25.277 | 99.99th=[13960] 00:19:25.277 bw ( KiB/s): min=31488, max=33024, per=33.31%, avg=32293.00, stdev=541.94, samples=19 00:19:25.277 iops : min= 246, max= 258, avg=252.26, stdev= 4.24, samples=19 00:19:25.277 lat (msec) : 10=0.24%, 20=99.76% 00:19:25.277 cpu : usr=90.57%, sys=8.79%, ctx=7, majf=0, minf=0 00:19:25.277 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:25.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.277 issued rwts: total=2526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.277 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:25.277 filename0: (groupid=0, jobs=1): err= 0: pid=83013: Fri Nov 15 12:54:33 2024 00:19:25.277 read: IOPS=252, BW=31.6MiB/s (33.1MB/s)(316MiB/10004msec) 00:19:25.277 slat (nsec): min=6780, max=49260, avg=9666.29, stdev=3962.12 00:19:25.277 clat (usec): min=6537, max=13908, avg=11856.82, stdev=447.90 00:19:25.277 lat (usec): min=6545, max=13920, avg=11866.49, stdev=448.27 00:19:25.277 clat percentiles (usec): 00:19:25.277 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11600], 20.00th=[11600], 00:19:25.277 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11731], 60.00th=[11731], 00:19:25.277 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12387], 95.00th=[12780], 00:19:25.277 | 99.00th=[13435], 99.50th=[13566], 99.90th=[13829], 99.95th=[13960], 00:19:25.277 | 99.99th=[13960] 00:19:25.277 bw ( KiB/s): min=31488, max=33024, per=33.35%, avg=32326.68, stdev=569.63, samples=19 00:19:25.277 iops : min= 246, max= 258, avg=252.53, stdev= 4.46, samples=19 00:19:25.277 lat (msec) : 10=0.12%, 20=99.88% 00:19:25.277 cpu : usr=91.53%, sys=7.85%, ctx=21, majf=0, minf=0 00:19:25.277 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:25.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.277 issued rwts: total=2526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.277 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:25.277 filename0: (groupid=0, jobs=1): err= 0: pid=83014: Fri Nov 15 12:54:33 2024 00:19:25.277 read: IOPS=252, BW=31.6MiB/s (33.1MB/s)(316MiB/10005msec) 00:19:25.277 slat (nsec): min=7080, max=64090, avg=14526.56, stdev=4266.32 00:19:25.277 clat (usec): min=8073, max=13913, avg=11850.03, stdev=440.42 00:19:25.277 lat (usec): min=8086, max=13928, avg=11864.55, stdev=440.89 00:19:25.277 clat percentiles (usec): 00:19:25.277 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11600], 00:19:25.277 | 30.00th=[11600], 40.00th=[11600], 
50.00th=[11731], 60.00th=[11731], 00:19:25.277 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12387], 95.00th=[12780], 00:19:25.277 | 99.00th=[13435], 99.50th=[13566], 99.90th=[13829], 99.95th=[13960], 00:19:25.277 | 99.99th=[13960] 00:19:25.277 bw ( KiB/s): min=31488, max=33024, per=33.32%, avg=32296.42, stdev=541.47, samples=19 00:19:25.277 iops : min= 246, max= 258, avg=252.32, stdev= 4.23, samples=19 00:19:25.277 lat (msec) : 10=0.24%, 20=99.76% 00:19:25.277 cpu : usr=90.89%, sys=8.51%, ctx=10, majf=0, minf=0 00:19:25.277 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:25.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.277 issued rwts: total=2526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.277 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:25.277 00:19:25.277 Run status group 0 (all jobs): 00:19:25.277 READ: bw=94.7MiB/s (99.3MB/s), 31.6MiB/s-31.6MiB/s (33.1MB/s-33.1MB/s), io=947MiB (993MB), run=10004-10006msec 00:19:25.277 12:54:33 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:19:25.277 12:54:33 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:19:25.278 12:54:33 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:19:25.278 12:54:33 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:25.278 12:54:33 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:19:25.278 12:54:33 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:25.278 12:54:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.278 12:54:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:25.537 12:54:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.537 12:54:33 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:25.537 12:54:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.537 12:54:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:25.537 12:54:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.537 00:19:25.537 real 0m10.913s 00:19:25.537 user 0m27.936s 00:19:25.537 sys 0m2.733s 00:19:25.537 12:54:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:25.537 12:54:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:25.537 ************************************ 00:19:25.537 END TEST fio_dif_digest 00:19:25.537 ************************************ 00:19:25.537 12:54:34 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:19:25.537 12:54:34 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:19:25.537 12:54:34 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:25.537 12:54:34 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:19:25.537 12:54:34 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:25.537 12:54:34 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:19:25.537 12:54:34 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:25.537 12:54:34 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:25.537 rmmod nvme_tcp 00:19:25.537 rmmod nvme_fabrics 00:19:25.537 rmmod nvme_keyring 00:19:25.537 12:54:34 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:25.537 12:54:34 
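The digest job that just finished never touched a kernel block device: fio loads SPDK's bdev ioengine via LD_PRELOAD and attaches the NVMe-oF controller described by the JSON printed earlier in the trace (note hdgst and ddgst are true for this test). A standalone sketch of that invocation, assuming gen_nvmf_target_json wraps the printed fragment in the usual {"subsystems":[{"subsystem":"bdev","config":[...]}]} envelope; /tmp/bdev.json and /tmp/digest.fio are illustrative names, the real inputs are anonymous /dev/fd descriptors built by the harness:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
cat > /tmp/bdev.json <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[{
  "method": "bdev_nvme_attach_controller",
  "params": {"name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3",
             "adrfam": "ipv4", "trsvcid": "4420",
             "subnqn": "nqn.2016-06.io.spdk:cnode0",
             "hostnqn": "nqn.2016-06.io.spdk:host0",
             "hdgst": true, "ddgst": true}}]}]}
EOF
# /tmp/digest.fio stands in for the job file gen_fio_conf writes (the
# filename0 job above: 128KiB blocks, iodepth 3, randread)
LD_PRELOAD=$plugin /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/digest.fio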
nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:19:25.537 12:54:34 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:19:25.537 12:54:34 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 82277 ']' 00:19:25.537 12:54:34 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 82277 00:19:25.537 12:54:34 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 82277 ']' 00:19:25.537 12:54:34 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 82277 00:19:25.537 12:54:34 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:19:25.537 12:54:34 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.537 12:54:34 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82277 00:19:25.537 12:54:34 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:25.537 12:54:34 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:25.537 killing process with pid 82277 00:19:25.537 12:54:34 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82277' 00:19:25.537 12:54:34 nvmf_dif -- common/autotest_common.sh@973 -- # kill 82277 00:19:25.537 12:54:34 nvmf_dif -- common/autotest_common.sh@978 -- # wait 82277 00:19:25.796 12:54:34 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:19:25.796 12:54:34 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:26.056 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:26.056 Waiting for block devices as requested 00:19:26.056 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:26.313 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:26.313 12:54:34 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:26.313 12:54:34 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:26.313 12:54:34 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:19:26.313 12:54:34 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:19:26.313 12:54:34 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:26.313 12:54:34 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:19:26.313 12:54:34 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:26.313 12:54:34 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:26.313 12:54:34 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:26.313 12:54:34 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:26.313 12:54:34 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:26.313 12:54:34 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:26.313 12:54:34 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:26.313 12:54:34 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:26.313 12:54:34 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:26.313 12:54:34 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:26.313 12:54:34 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:26.313 12:54:34 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:26.313 12:54:34 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:26.572 12:54:35 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:26.572 12:54:35 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:26.572 12:54:35 
nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:26.572 12:54:35 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.572 12:54:35 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:26.572 12:54:35 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.572 12:54:35 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:19:26.572 00:19:26.572 real 0m58.797s 00:19:26.572 user 3m46.572s 00:19:26.572 sys 0m19.153s 00:19:26.572 12:54:35 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.572 12:54:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:26.572 ************************************ 00:19:26.572 END TEST nvmf_dif 00:19:26.572 ************************************ 00:19:26.572 12:54:35 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:26.572 12:54:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:26.572 12:54:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:26.572 12:54:35 -- common/autotest_common.sh@10 -- # set +x 00:19:26.572 ************************************ 00:19:26.572 START TEST nvmf_abort_qd_sizes 00:19:26.572 ************************************ 00:19:26.572 12:54:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:26.572 * Looking for test storage... 00:19:26.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:26.572 12:54:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:26.572 12:54:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:19:26.572 12:54:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:26.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.832 --rc genhtml_branch_coverage=1 00:19:26.832 --rc genhtml_function_coverage=1 00:19:26.832 --rc genhtml_legend=1 00:19:26.832 --rc geninfo_all_blocks=1 00:19:26.832 --rc geninfo_unexecuted_blocks=1 00:19:26.832 00:19:26.832 ' 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:26.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.832 --rc genhtml_branch_coverage=1 00:19:26.832 --rc genhtml_function_coverage=1 00:19:26.832 --rc genhtml_legend=1 00:19:26.832 --rc geninfo_all_blocks=1 00:19:26.832 --rc geninfo_unexecuted_blocks=1 00:19:26.832 00:19:26.832 ' 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:26.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.832 --rc genhtml_branch_coverage=1 00:19:26.832 --rc genhtml_function_coverage=1 00:19:26.832 --rc genhtml_legend=1 00:19:26.832 --rc geninfo_all_blocks=1 00:19:26.832 --rc geninfo_unexecuted_blocks=1 00:19:26.832 00:19:26.832 ' 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:26.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.832 --rc genhtml_branch_coverage=1 00:19:26.832 --rc genhtml_function_coverage=1 00:19:26.832 --rc genhtml_legend=1 00:19:26.832 --rc geninfo_all_blocks=1 00:19:26.832 --rc geninfo_unexecuted_blocks=1 00:19:26.832 00:19:26.832 ' 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:26.832 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:19:26.832 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:26.833 Cannot find device "nvmf_init_br" 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:26.833 Cannot find device "nvmf_init_br2" 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:26.833 Cannot find device "nvmf_tgt_br" 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:26.833 Cannot find device "nvmf_tgt_br2" 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:26.833 Cannot find device "nvmf_init_br" 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:26.833 Cannot find device "nvmf_init_br2" 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:26.833 Cannot find device "nvmf_tgt_br" 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:26.833 Cannot find device "nvmf_tgt_br2" 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:26.833 Cannot find device "nvmf_br" 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:26.833 Cannot find device "nvmf_init_if" 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:26.833 Cannot find device "nvmf_init_if2" 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:26.833 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:19:26.833 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:27.092 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:27.092 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:27.093 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:27.093 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:27.093 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:27.093 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:27.093 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:19:27.093 00:19:27.093 --- 10.0.0.3 ping statistics --- 00:19:27.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.093 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:19:27.093 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:27.093 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:27.093 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:19:27.093 00:19:27.093 --- 10.0.0.4 ping statistics --- 00:19:27.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.093 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:19:27.093 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:27.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:27.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:19:27.093 00:19:27.093 --- 10.0.0.1 ping statistics --- 00:19:27.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.093 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:19:27.093 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:27.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:27.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:19:27.093 00:19:27.093 --- 10.0.0.2 ping statistics --- 00:19:27.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.093 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:27.093 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:27.093 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:19:27.093 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:19:27.093 12:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:28.030 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:28.030 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:28.030 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:28.030 12:54:36 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:28.030 12:54:36 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:28.030 12:54:36 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:28.030 12:54:36 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:28.030 12:54:36 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:28.030 12:54:36 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:28.030 12:54:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:19:28.030 12:54:36 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:28.030 12:54:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:28.030 12:54:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:28.030 12:54:36 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=83662 00:19:28.030 12:54:36 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 83662 00:19:28.030 12:54:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 83662 ']' 00:19:28.030 12:54:36 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:19:28.030 12:54:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.030 12:54:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:28.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.030 12:54:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.030 12:54:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.030 12:54:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:28.289 [2024-11-15 12:54:36.722418] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
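The nvmf_tgt now starting will listen inside the nvmf_tgt_ns_spdk namespace on the virtual topology nvmf_veth_init built above: initiator interfaces in the root namespace at 10.0.0.1/10.0.0.2, target interfaces in the namespace at 10.0.0.3/10.0.0.4, all bridged through nvmf_br, with iptables ACCEPT rules for TCP port 4420 (the trace additionally tags those rules with an SPDK_NVMF comment). A one-pair sketch of that setup, second initiator/target pair omitted, root required:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3   # same reachability check the trace runs before starting the target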
00:19:28.289 [2024-11-15 12:54:36.722513] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.289 [2024-11-15 12:54:36.876226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:28.289 [2024-11-15 12:54:36.918207] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.289 [2024-11-15 12:54:36.918266] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.289 [2024-11-15 12:54:36.918281] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:28.289 [2024-11-15 12:54:36.918291] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:28.289 [2024-11-15 12:54:36.918300] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:28.289 [2024-11-15 12:54:36.919217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.289 [2024-11-15 12:54:36.919389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:28.289 [2024-11-15 12:54:36.919396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.289 [2024-11-15 12:54:36.919271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.289 [2024-11-15 12:54:36.956747] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:19:28.549 12:54:37 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
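The nvme_in_userspace walk above picks the local controllers for the abort test: enumerate PCI functions with class/subclass/prog-if 01/08/02 (NVMe), then keep the ones SPDK can drive from userspace. A condensed sketch of that pipeline; the allow/block-list filtering done by pci_can_use and the FreeBSD branch are dropped, and the keep/skip test is written under the assumption that functions still claimed by the kernel nvme driver are excluded:

nvmes=()
for bdf in $(lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'); do
    # assumption: skip functions bound to the kernel nvme driver; what remains
    # (rebound by setup.sh earlier in this run) is available to SPDK's userspace driver
    [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && continue
    nvmes+=("$bdf")
done
printf '%s\n' "${nvmes[@]}"

On this VM that yields 0000:00:10.0 and 0000:00:11.0, and the first of the two becomes the spdk_target_abort device.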
00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:28.549 12:54:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:28.549 ************************************ 00:19:28.549 START TEST spdk_target_abort 00:19:28.549 ************************************ 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:28.549 spdk_targetn1 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:28.549 [2024-11-15 12:54:37.168842] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:28.549 [2024-11-15 12:54:37.209915] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:28.549 12:54:37 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:28.549 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:28.550 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:28.809 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:28.809 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:28.809 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:19:28.809 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:28.809 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:28.809 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:28.809 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:28.809 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:28.809 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:32.125 Initializing NVMe Controllers 00:19:32.125 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:19:32.125 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:32.125 Initialization complete. Launching workers. 
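Everything the spdk_target function needs from the running target is created through rpc_cmd, a thin wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock socket: attach the local PCIe controller as a bdev, create the TCP transport, create the test subsystem, add the namespace, add the listener. A condensed sketch of that bring-up issued directly with rpc.py, using the same addresses and NQN as the trace:

    # Equivalent of the rpc_cmd sequence above, run against the already-started SPDK target.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target   # exposes bdev spdk_targetn1
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420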
00:19:32.125 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10102, failed: 0 00:19:32.125 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1030, failed to submit 9072 00:19:32.125 success 694, unsuccessful 336, failed 0 00:19:32.125 12:54:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:32.125 12:54:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:35.413 Initializing NVMe Controllers 00:19:35.413 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:19:35.413 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:35.413 Initialization complete. Launching workers. 00:19:35.413 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8895, failed: 0 00:19:35.413 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1172, failed to submit 7723 00:19:35.413 success 380, unsuccessful 792, failed 0 00:19:35.413 12:54:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:35.413 12:54:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:38.700 Initializing NVMe Controllers 00:19:38.700 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:19:38.700 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:38.700 Initialization complete. Launching workers. 
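Each summary block reads the same way: "I/O completed" is the number of I/Os the workload finished, "abort submitted" plus "failed to submit" accounts for all of them, and "success"/"unsuccessful" splits the submitted aborts by outcome. For the qd=4 run above that is 1030 + 9072 = 10102 and 694 + 336 = 1030. The only thing rabort varies between blocks is the queue depth; a condensed sketch of its loop, with the binary path and NQN taken from the trace:

    # Re-run the SPDK abort example (50/50 read/write, 4 KiB I/O) at each queue depth under test.
    abort_bin=/home/vagrant/spdk_repo/spdk/build/examples/abort
    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        "$abort_bin" -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done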
00:19:38.700 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31837, failed: 0 00:19:38.700 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2313, failed to submit 29524 00:19:38.700 success 428, unsuccessful 1885, failed 0 00:19:38.700 12:54:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:19:38.700 12:54:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.700 12:54:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:38.700 12:54:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.700 12:54:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:19:38.700 12:54:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.700 12:54:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 83662 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 83662 ']' 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 83662 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83662 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:39.267 killing process with pid 83662 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83662' 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 83662 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 83662 00:19:39.267 00:19:39.267 real 0m10.750s 00:19:39.267 user 0m40.855s 00:19:39.267 sys 0m2.053s 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:39.267 ************************************ 00:19:39.267 END TEST spdk_target_abort 00:19:39.267 ************************************ 00:19:39.267 12:54:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:19:39.267 12:54:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:39.267 12:54:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:39.267 12:54:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:39.267 ************************************ 00:19:39.267 START TEST kernel_target_abort 00:19:39.267 
************************************ 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:39.267 12:54:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:39.836 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:39.836 Waiting for block devices as requested 00:19:39.836 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:39.836 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:40.096 No valid GPT data, bailing 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:40.096 No valid GPT data, bailing 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
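Before a block device is considered as backing storage for the kernel target, nvmf/common.sh refuses anything that is zoned or already carries a partition table; "No valid GPT data, bailing" from spdk-gpt.py is therefore the expected output for a blank disk, and the blkid PTTYPE probe that follows is the fallback check for non-GPT tables. A minimal sketch of the same guard, assuming blkid from util-linux (the device name is just an example):

    # Skip a block device if it is zoned or has any partition table.
    dev=/dev/nvme0n1
    name=${dev##*/}
    if [[ -e /sys/block/$name/queue/zoned && $(cat /sys/block/$name/queue/zoned) != none ]]; then
        echo "$name is zoned, skipping"
    elif [[ -n $(blkid -s PTTYPE -o value "$dev") ]]; then
        echo "$name has a partition table, skipping"
    else
        echo "$name looks unused, safe to export"
    fi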
00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:40.096 No valid GPT data, bailing 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:19:40.096 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:40.356 No valid GPT data, bailing 00:19:40.356 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:40.356 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:19:40.356 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:19:40.356 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:19:40.356 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:19:40.356 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:40.356 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:40.356 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:40.356 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:40.356 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:19:40.356 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:19:40.356 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:19:40.356 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:19:40.356 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:19:40.356 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:19:40.356 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:19:40.356 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:40.356 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc --hostid=85bcfa6f-4742-42db-8cde-87c16c4a32fc -a 10.0.0.1 -t tcp -s 4420 00:19:40.356 00:19:40.356 Discovery Log Number of Records 2, Generation counter 2 00:19:40.356 =====Discovery Log Entry 0====== 00:19:40.356 trtype: tcp 00:19:40.356 adrfam: ipv4 00:19:40.356 subtype: current discovery subsystem 00:19:40.356 treq: not specified, sq flow control disable supported 00:19:40.356 portid: 1 00:19:40.356 trsvcid: 4420 00:19:40.356 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:40.356 traddr: 10.0.0.1 00:19:40.356 eflags: none 00:19:40.356 sectype: none 00:19:40.356 =====Discovery Log Entry 1====== 00:19:40.356 trtype: tcp 00:19:40.356 adrfam: ipv4 00:19:40.356 subtype: nvme subsystem 00:19:40.356 treq: not specified, sq flow control disable supported 00:19:40.356 portid: 1 00:19:40.356 trsvcid: 4420 00:19:40.356 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:40.356 traddr: 10.0.0.1 00:19:40.356 eflags: none 00:19:40.356 sectype: none 00:19:40.356 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:19:40.356 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:40.356 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:40.356 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:19:40.357 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:40.357 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:19:40.357 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:40.357 12:54:48 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:40.357 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:40.357 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:40.357 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:40.357 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:40.357 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:40.357 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:40.357 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:19:40.357 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:40.357 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:19:40.357 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:40.357 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:40.357 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:40.357 12:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:43.647 Initializing NVMe Controllers 00:19:43.647 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:43.647 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:43.647 Initialization complete. Launching workers. 00:19:43.647 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33057, failed: 0 00:19:43.647 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33057, failed to submit 0 00:19:43.647 success 0, unsuccessful 33057, failed 0 00:19:43.647 12:54:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:43.647 12:54:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:46.937 Initializing NVMe Controllers 00:19:46.937 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:46.937 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:46.937 Initialization complete. Launching workers. 
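The kernel_target_abort runs do not involve an SPDK target at all: the trace just before them assembles an NVMe-oF/TCP target from the in-kernel nvmet driver purely through configfs, then verifies it with nvme discover (the two-record discovery log shown above). A condensed sketch of that bring-up with the same NQN, address and backing device; the attribute paths are the standard nvmet configfs names that the common.sh helper writes to:

    # Export /dev/nvme1n1 over NVMe/TCP on 10.0.0.1:4420 via the kernel nvmet configfs tree.
    modprobe nvmet
    modprobe nvmet-tcp
    cfg=/sys/kernel/config/nvmet
    subsys=$cfg/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir -p "$subsys/namespaces/1" "$cfg/ports/1"
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$cfg/ports/1/addr_traddr"
    echo tcp          > "$cfg/ports/1/addr_trtype"
    echo 4420         > "$cfg/ports/1/addr_trsvcid"
    echo ipv4         > "$cfg/ports/1/addr_adrfam"
    ln -s "$subsys" "$cfg/ports/1/subsystems/"

clean_kernel_target later undoes this in reverse order (echo 0 to enable, rm -f the port symlink, rmdir the namespace, port and subsystem directories, then modprobe -r the nvmet modules), which is the sequence visible further down in the trace.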
00:19:46.937 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63103, failed: 0 00:19:46.937 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25920, failed to submit 37183 00:19:46.937 success 0, unsuccessful 25920, failed 0 00:19:46.937 12:54:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:46.937 12:54:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:50.227 Initializing NVMe Controllers 00:19:50.227 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:50.227 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:50.227 Initialization complete. Launching workers. 00:19:50.227 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68728, failed: 0 00:19:50.227 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17134, failed to submit 51594 00:19:50.227 success 0, unsuccessful 17134, failed 0 00:19:50.227 12:54:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:19:50.227 12:54:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:50.227 12:54:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:19:50.227 12:54:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:50.227 12:54:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:50.227 12:54:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:50.227 12:54:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:50.227 12:54:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:19:50.227 12:54:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:19:50.227 12:54:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:50.486 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:51.865 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:52.125 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:52.125 00:19:52.125 real 0m12.703s 00:19:52.125 user 0m5.814s 00:19:52.125 sys 0m4.221s 00:19:52.125 12:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:52.125 ************************************ 00:19:52.125 12:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:52.125 END TEST kernel_target_abort 00:19:52.125 ************************************ 00:19:52.125 12:55:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:19:52.125 12:55:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:19:52.125 
12:55:00 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:52.125 12:55:00 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:19:52.125 12:55:00 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:52.125 12:55:00 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:19:52.125 12:55:00 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:52.125 12:55:00 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:52.125 rmmod nvme_tcp 00:19:52.125 rmmod nvme_fabrics 00:19:52.125 rmmod nvme_keyring 00:19:52.125 12:55:00 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:52.125 12:55:00 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:19:52.125 12:55:00 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:19:52.125 12:55:00 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 83662 ']' 00:19:52.125 12:55:00 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 83662 00:19:52.125 12:55:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 83662 ']' 00:19:52.125 12:55:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 83662 00:19:52.125 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (83662) - No such process 00:19:52.125 Process with pid 83662 is not found 00:19:52.125 12:55:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 83662 is not found' 00:19:52.125 12:55:00 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:19:52.125 12:55:00 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:52.715 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:52.715 Waiting for block devices as requested 00:19:52.715 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:52.715 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:52.998 12:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:52.998 12:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:52.998 12:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:19:52.998 12:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:52.998 12:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:19:52.998 12:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:19:52.998 12:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:52.998 12:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:52.998 12:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:52.998 12:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:52.998 12:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:52.998 12:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:52.999 12:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:52.999 12:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:52.999 12:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:52.999 12:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:52.999 12:55:01 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:52.999 12:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:52.999 12:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:52.999 12:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:52.999 12:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:52.999 12:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:52.999 12:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.999 12:55:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:52.999 12:55:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.999 12:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:19:52.999 00:19:52.999 real 0m26.485s 00:19:52.999 user 0m47.869s 00:19:52.999 sys 0m7.732s 00:19:52.999 ************************************ 00:19:52.999 END TEST nvmf_abort_qd_sizes 00:19:52.999 ************************************ 00:19:52.999 12:55:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:52.999 12:55:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:52.999 12:55:01 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:19:52.999 12:55:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:52.999 12:55:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:52.999 12:55:01 -- common/autotest_common.sh@10 -- # set +x 00:19:53.272 ************************************ 00:19:53.272 START TEST keyring_file 00:19:53.272 ************************************ 00:19:53.272 12:55:01 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:19:53.272 * Looking for test storage... 
00:19:53.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:19:53.272 12:55:01 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:53.272 12:55:01 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:19:53.272 12:55:01 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:53.272 12:55:01 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:53.272 12:55:01 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:53.272 12:55:01 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:53.272 12:55:01 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:53.272 12:55:01 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:19:53.272 12:55:01 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:19:53.272 12:55:01 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:19:53.272 12:55:01 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:19:53.272 12:55:01 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:19:53.272 12:55:01 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:19:53.272 12:55:01 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:19:53.272 12:55:01 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:53.272 12:55:01 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:19:53.272 12:55:01 keyring_file -- scripts/common.sh@345 -- # : 1 00:19:53.272 12:55:01 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:53.272 12:55:01 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:53.272 12:55:01 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:19:53.272 12:55:01 keyring_file -- scripts/common.sh@353 -- # local d=1 00:19:53.272 12:55:01 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:53.272 12:55:01 keyring_file -- scripts/common.sh@355 -- # echo 1 00:19:53.272 12:55:01 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:19:53.272 12:55:01 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:19:53.272 12:55:01 keyring_file -- scripts/common.sh@353 -- # local d=2 00:19:53.272 12:55:01 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:53.272 12:55:01 keyring_file -- scripts/common.sh@355 -- # echo 2 00:19:53.272 12:55:01 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:19:53.272 12:55:01 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:53.272 12:55:01 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:53.272 12:55:01 keyring_file -- scripts/common.sh@368 -- # return 0 00:19:53.272 12:55:01 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:53.272 12:55:01 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:53.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.272 --rc genhtml_branch_coverage=1 00:19:53.272 --rc genhtml_function_coverage=1 00:19:53.272 --rc genhtml_legend=1 00:19:53.272 --rc geninfo_all_blocks=1 00:19:53.272 --rc geninfo_unexecuted_blocks=1 00:19:53.272 00:19:53.272 ' 00:19:53.272 12:55:01 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:53.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.272 --rc genhtml_branch_coverage=1 00:19:53.272 --rc genhtml_function_coverage=1 00:19:53.272 --rc genhtml_legend=1 00:19:53.272 --rc geninfo_all_blocks=1 00:19:53.272 --rc 
geninfo_unexecuted_blocks=1 00:19:53.272 00:19:53.272 ' 00:19:53.272 12:55:01 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:53.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.272 --rc genhtml_branch_coverage=1 00:19:53.273 --rc genhtml_function_coverage=1 00:19:53.273 --rc genhtml_legend=1 00:19:53.273 --rc geninfo_all_blocks=1 00:19:53.273 --rc geninfo_unexecuted_blocks=1 00:19:53.273 00:19:53.273 ' 00:19:53.273 12:55:01 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:53.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.273 --rc genhtml_branch_coverage=1 00:19:53.273 --rc genhtml_function_coverage=1 00:19:53.273 --rc genhtml_legend=1 00:19:53.273 --rc geninfo_all_blocks=1 00:19:53.273 --rc geninfo_unexecuted_blocks=1 00:19:53.273 00:19:53.273 ' 00:19:53.273 12:55:01 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:19:53.273 12:55:01 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:53.273 12:55:01 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:19:53.273 12:55:01 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:53.273 12:55:01 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:53.273 12:55:01 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:53.273 12:55:01 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.273 12:55:01 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.273 12:55:01 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.273 12:55:01 keyring_file -- paths/export.sh@5 -- # export PATH 00:19:53.273 12:55:01 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@51 -- # : 0 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:53.273 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:53.273 12:55:01 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:19:53.273 12:55:01 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:19:53.273 12:55:01 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:19:53.273 12:55:01 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:19:53.273 12:55:01 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:19:53.273 12:55:01 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:19:53.273 12:55:01 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:19:53.273 12:55:01 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:19:53.273 12:55:01 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:19:53.273 12:55:01 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:19:53.273 12:55:01 keyring_file -- keyring/common.sh@17 -- # digest=0 00:19:53.273 12:55:01 keyring_file -- keyring/common.sh@18 -- # mktemp 00:19:53.273 12:55:01 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5JnrBczYG9 00:19:53.273 12:55:01 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:19:53.273 12:55:01 keyring_file -- nvmf/common.sh@733 -- # python - 00:19:53.544 12:55:01 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5JnrBczYG9 00:19:53.544 12:55:01 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5JnrBczYG9 00:19:53.544 12:55:01 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.5JnrBczYG9 00:19:53.544 12:55:01 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:19:53.544 12:55:01 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:19:53.544 12:55:01 keyring_file -- keyring/common.sh@17 -- # name=key1 00:19:53.544 12:55:01 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:19:53.544 12:55:01 keyring_file -- keyring/common.sh@17 -- # digest=0 00:19:53.544 12:55:01 keyring_file -- keyring/common.sh@18 -- # mktemp 00:19:53.544 12:55:01 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Tc7m7Ljp80 00:19:53.544 12:55:01 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:19:53.544 12:55:01 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:19:53.544 12:55:01 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:19:53.544 12:55:01 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:53.544 12:55:01 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:19:53.544 12:55:01 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:19:53.544 12:55:01 keyring_file -- nvmf/common.sh@733 -- # python - 00:19:53.544 12:55:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Tc7m7Ljp80 00:19:53.544 12:55:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Tc7m7Ljp80 00:19:53.544 12:55:02 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Tc7m7Ljp80 00:19:53.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
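prep_key is what turns the raw hex keys at the top of file.sh into something the keyring can load: it mktemps a path, converts the hex string into the NVMe TLS PSK interchange format (the trace drops into an inline python helper for the label/base64 step), writes it out and locks the file down to 0600. A rough sketch of that shape; wrap_psk here is a hypothetical stand-in for the inline python, not a real command:

    # Create a 0600 key file holding an NVMe TLS PSK in interchange format.
    key_hex=00112233445566778899aabbccddeeff
    key_path=$(mktemp)                    # e.g. /tmp/tmp.5JnrBczYG9 in this run
    # wrap_psk (hypothetical) stands in for the python snippet that prefixes the
    # NVMeTLSkey-1 label and base64-encodes the key material; the trailing 0
    # mirrors the digest argument prep_key passes in the trace.
    wrap_psk "$key_hex" 0 > "$key_path"
    chmod 0600 "$key_path"
    # The path, not the key itself, is what gets registered later:
    #   scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key_path"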
00:19:53.544 12:55:02 keyring_file -- keyring/file.sh@30 -- # tgtpid=84568 00:19:53.544 12:55:02 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:53.544 12:55:02 keyring_file -- keyring/file.sh@32 -- # waitforlisten 84568 00:19:53.544 12:55:02 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 84568 ']' 00:19:53.544 12:55:02 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.544 12:55:02 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:53.544 12:55:02 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.544 12:55:02 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:53.544 12:55:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:19:53.544 [2024-11-15 12:55:02.077430] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:19:53.544 [2024-11-15 12:55:02.077734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84568 ] 00:19:53.803 [2024-11-15 12:55:02.229637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.803 [2024-11-15 12:55:02.269290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.803 [2024-11-15 12:55:02.317518] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:53.803 12:55:02 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:53.803 12:55:02 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:19:53.803 12:55:02 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:19:53.803 12:55:02 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.803 12:55:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:19:54.062 [2024-11-15 12:55:02.471848] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:54.062 null0 00:19:54.062 [2024-11-15 12:55:02.503828] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:54.062 [2024-11-15 12:55:02.504037] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:19:54.062 12:55:02 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.062 12:55:02 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:19:54.062 12:55:02 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:19:54.062 12:55:02 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:19:54.062 12:55:02 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:54.062 12:55:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:54.062 12:55:02 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:54.062 12:55:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:54.062 12:55:02 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:19:54.062 12:55:02 keyring_file -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.062 12:55:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:19:54.062 [2024-11-15 12:55:02.535815] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:19:54.062 request: 00:19:54.062 { 00:19:54.062 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:19:54.062 "secure_channel": false, 00:19:54.062 "listen_address": { 00:19:54.062 "trtype": "tcp", 00:19:54.062 "traddr": "127.0.0.1", 00:19:54.062 "trsvcid": "4420" 00:19:54.062 }, 00:19:54.062 "method": "nvmf_subsystem_add_listener", 00:19:54.062 "req_id": 1 00:19:54.062 } 00:19:54.062 Got JSON-RPC error response 00:19:54.062 response: 00:19:54.062 { 00:19:54.062 "code": -32602, 00:19:54.062 "message": "Invalid parameters" 00:19:54.062 } 00:19:54.062 12:55:02 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:54.062 12:55:02 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:19:54.062 12:55:02 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:54.062 12:55:02 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:54.062 12:55:02 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:54.062 12:55:02 keyring_file -- keyring/file.sh@47 -- # bperfpid=84578 00:19:54.062 12:55:02 keyring_file -- keyring/file.sh@49 -- # waitforlisten 84578 /var/tmp/bperf.sock 00:19:54.062 12:55:02 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:19:54.062 12:55:02 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 84578 ']' 00:19:54.062 12:55:02 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:54.062 12:55:02 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.062 12:55:02 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:54.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:54.062 12:55:02 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.062 12:55:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:19:54.062 [2024-11-15 12:55:02.627645] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
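The NOT wrapper around nvmf_subsystem_add_listener is a deliberate negative test: the target is already listening on 127.0.0.1:4420 for cnode0, so a second add_listener for the same address must be rejected, and the step only passes because the RPC comes back with the -32602 "Invalid parameters" response shown (the target logs "Listener already exists"). A hedged sketch of how the same check looks issued straight through rpc.py against a target where the listener does not yet exist, with the NQN and address from the trace:

    # The first add succeeds; repeating it for the same address must fail.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 127.0.0.1 -s 4420
    if "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 127.0.0.1 -s 4420; then
        echo "unexpected success: duplicate listener accepted" >&2
        exit 1
    fi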
00:19:54.062 [2024-11-15 12:55:02.628013] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84578 ] 00:19:54.321 [2024-11-15 12:55:02.790597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.321 [2024-11-15 12:55:02.829265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.321 [2024-11-15 12:55:02.862180] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:54.321 12:55:02 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:54.321 12:55:02 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:19:54.321 12:55:02 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5JnrBczYG9 00:19:54.321 12:55:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5JnrBczYG9 00:19:54.579 12:55:03 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Tc7m7Ljp80 00:19:54.579 12:55:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Tc7m7Ljp80 00:19:54.838 12:55:03 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:19:54.838 12:55:03 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:19:54.838 12:55:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:54.838 12:55:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:54.838 12:55:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:55.096 12:55:03 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.5JnrBczYG9 == \/\t\m\p\/\t\m\p\.\5\J\n\r\B\c\z\Y\G\9 ]] 00:19:55.096 12:55:03 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:19:55.096 12:55:03 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:19:55.096 12:55:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:55.096 12:55:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:55.096 12:55:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:19:55.354 12:55:03 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.Tc7m7Ljp80 == \/\t\m\p\/\t\m\p\.\T\c\7\m\7\L\j\p\8\0 ]] 00:19:55.354 12:55:03 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:19:55.354 12:55:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:19:55.354 12:55:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:55.354 12:55:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:55.354 12:55:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:55.354 12:55:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:55.614 12:55:04 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:19:55.614 12:55:04 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:19:55.614 12:55:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:19:55.614 12:55:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:55.614 12:55:04 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:55.614 12:55:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:55.614 12:55:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:19:55.873 12:55:04 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:19:55.873 12:55:04 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:55.873 12:55:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:56.132 [2024-11-15 12:55:04.643759] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:56.132 nvme0n1 00:19:56.132 12:55:04 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:19:56.132 12:55:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:19:56.132 12:55:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:56.132 12:55:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:56.132 12:55:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:56.132 12:55:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:56.390 12:55:04 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:19:56.390 12:55:04 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:19:56.390 12:55:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:19:56.390 12:55:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:56.390 12:55:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:19:56.390 12:55:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:56.390 12:55:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:56.648 12:55:05 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:19:56.648 12:55:05 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:56.648 Running I/O for 1 seconds... 
00:19:58.024 14093.00 IOPS, 55.05 MiB/s 00:19:58.024 Latency(us) 00:19:58.024 [2024-11-15T12:55:06.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.024 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:19:58.024 nvme0n1 : 1.01 14145.85 55.26 0.00 0.00 9029.00 3544.90 14954.12 00:19:58.024 [2024-11-15T12:55:06.694Z] =================================================================================================================== 00:19:58.024 [2024-11-15T12:55:06.694Z] Total : 14145.85 55.26 0.00 0.00 9029.00 3544.90 14954.12 00:19:58.024 { 00:19:58.024 "results": [ 00:19:58.024 { 00:19:58.024 "job": "nvme0n1", 00:19:58.024 "core_mask": "0x2", 00:19:58.024 "workload": "randrw", 00:19:58.024 "percentage": 50, 00:19:58.024 "status": "finished", 00:19:58.024 "queue_depth": 128, 00:19:58.024 "io_size": 4096, 00:19:58.024 "runtime": 1.005383, 00:19:58.024 "iops": 14145.85287397937, 00:19:58.024 "mibps": 55.25723778898191, 00:19:58.024 "io_failed": 0, 00:19:58.024 "io_timeout": 0, 00:19:58.024 "avg_latency_us": 9029.004438194346, 00:19:58.024 "min_latency_us": 3544.9018181818183, 00:19:58.024 "max_latency_us": 14954.123636363636 00:19:58.024 } 00:19:58.024 ], 00:19:58.024 "core_count": 1 00:19:58.024 } 00:19:58.024 12:55:06 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:19:58.024 12:55:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:19:58.024 12:55:06 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:19:58.024 12:55:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:19:58.024 12:55:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:58.024 12:55:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:58.024 12:55:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:58.024 12:55:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:58.282 12:55:06 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:19:58.282 12:55:06 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:19:58.282 12:55:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:19:58.282 12:55:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:58.282 12:55:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:58.282 12:55:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:58.282 12:55:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:19:58.541 12:55:07 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:19:58.542 12:55:07 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:19:58.542 12:55:07 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:19:58.542 12:55:07 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:19:58.542 12:55:07 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:19:58.542 12:55:07 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:58.542 12:55:07 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:19:58.542 12:55:07 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:58.542 12:55:07 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:19:58.542 12:55:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:19:58.801 [2024-11-15 12:55:07.347502] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:58.802 [2024-11-15 12:55:07.348023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229e770 (107): Transport endpoint is not connected 00:19:58.802 [2024-11-15 12:55:07.349016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229e770 (9): Bad file descriptor 00:19:58.802 [2024-11-15 12:55:07.350019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:19:58.802 [2024-11-15 12:55:07.350072] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:19:58.802 [2024-11-15 12:55:07.350099] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:19:58.802 [2024-11-15 12:55:07.350110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:19:58.802 request: 00:19:58.802 { 00:19:58.802 "name": "nvme0", 00:19:58.802 "trtype": "tcp", 00:19:58.802 "traddr": "127.0.0.1", 00:19:58.802 "adrfam": "ipv4", 00:19:58.802 "trsvcid": "4420", 00:19:58.802 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:58.802 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:58.802 "prchk_reftag": false, 00:19:58.802 "prchk_guard": false, 00:19:58.802 "hdgst": false, 00:19:58.802 "ddgst": false, 00:19:58.802 "psk": "key1", 00:19:58.802 "allow_unrecognized_csi": false, 00:19:58.802 "method": "bdev_nvme_attach_controller", 00:19:58.802 "req_id": 1 00:19:58.802 } 00:19:58.802 Got JSON-RPC error response 00:19:58.802 response: 00:19:58.802 { 00:19:58.802 "code": -5, 00:19:58.802 "message": "Input/output error" 00:19:58.802 } 00:19:58.802 12:55:07 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:19:58.802 12:55:07 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:58.802 12:55:07 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:58.802 12:55:07 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:58.802 12:55:07 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:19:58.802 12:55:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:58.802 12:55:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:19:58.802 12:55:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:58.802 12:55:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:58.802 12:55:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:59.061 12:55:07 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:19:59.061 12:55:07 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:19:59.061 12:55:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:19:59.061 12:55:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:59.061 12:55:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:59.061 12:55:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:59.061 12:55:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:19:59.321 12:55:07 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:19:59.321 12:55:07 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:19:59.321 12:55:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:19:59.595 12:55:08 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:19:59.595 12:55:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:19:59.858 12:55:08 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:19:59.858 12:55:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:59.858 12:55:08 keyring_file -- keyring/file.sh@78 -- # jq length 00:20:00.121 12:55:08 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:20:00.121 12:55:08 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.5JnrBczYG9 00:20:00.121 12:55:08 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.5JnrBczYG9 00:20:00.121 12:55:08 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:20:00.121 12:55:08 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.5JnrBczYG9 00:20:00.121 12:55:08 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:20:00.121 12:55:08 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:00.121 12:55:08 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:20:00.121 12:55:08 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:00.121 12:55:08 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5JnrBczYG9 00:20:00.121 12:55:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5JnrBczYG9 00:20:00.391 [2024-11-15 12:55:08.838540] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.5JnrBczYG9': 0100660 00:20:00.391 [2024-11-15 12:55:08.838572] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:00.391 request: 00:20:00.391 { 00:20:00.391 "name": "key0", 00:20:00.391 "path": "/tmp/tmp.5JnrBczYG9", 00:20:00.391 "method": "keyring_file_add_key", 00:20:00.391 "req_id": 1 00:20:00.391 } 00:20:00.391 Got JSON-RPC error response 00:20:00.391 response: 00:20:00.391 { 00:20:00.391 "code": -1, 00:20:00.391 "message": "Operation not permitted" 00:20:00.391 } 00:20:00.391 12:55:08 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:20:00.391 12:55:08 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:00.391 12:55:08 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:00.391 12:55:08 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:00.391 12:55:08 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.5JnrBczYG9 00:20:00.391 12:55:08 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5JnrBczYG9 00:20:00.391 12:55:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5JnrBczYG9 00:20:00.655 12:55:09 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.5JnrBczYG9 00:20:00.655 12:55:09 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:20:00.655 12:55:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:00.655 12:55:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:00.655 12:55:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:00.655 12:55:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:00.655 12:55:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:00.914 12:55:09 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:20:00.914 12:55:09 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:00.914 12:55:09 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:20:00.914 12:55:09 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:00.914 12:55:09 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:20:00.914 12:55:09 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:00.914 12:55:09 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:20:00.914 12:55:09 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:00.914 12:55:09 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:00.914 12:55:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:01.176 [2024-11-15 12:55:09.622755] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.5JnrBczYG9': No such file or directory 00:20:01.176 [2024-11-15 12:55:09.622975] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:20:01.176 [2024-11-15 12:55:09.623113] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:20:01.176 [2024-11-15 12:55:09.623149] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:20:01.176 [2024-11-15 12:55:09.623247] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:01.176 [2024-11-15 12:55:09.623260] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:20:01.176 request: 00:20:01.176 { 00:20:01.176 "name": "nvme0", 00:20:01.176 "trtype": "tcp", 00:20:01.176 "traddr": "127.0.0.1", 00:20:01.176 "adrfam": "ipv4", 00:20:01.176 "trsvcid": "4420", 00:20:01.176 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:01.176 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:01.176 "prchk_reftag": false, 00:20:01.176 "prchk_guard": false, 00:20:01.176 "hdgst": false, 00:20:01.176 "ddgst": false, 00:20:01.176 "psk": "key0", 00:20:01.176 "allow_unrecognized_csi": false, 00:20:01.176 "method": "bdev_nvme_attach_controller", 00:20:01.176 "req_id": 1 00:20:01.176 } 00:20:01.176 Got JSON-RPC error response 00:20:01.176 response: 00:20:01.176 { 00:20:01.176 "code": -19, 00:20:01.176 "message": "No such device" 00:20:01.176 } 00:20:01.176 12:55:09 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:20:01.176 12:55:09 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:01.176 12:55:09 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:01.176 12:55:09 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:01.176 12:55:09 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:20:01.176 12:55:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:20:01.436 12:55:09 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:20:01.436 12:55:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:20:01.436 12:55:09 keyring_file -- keyring/common.sh@17 -- # name=key0 00:20:01.436 12:55:09 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:20:01.436 
12:55:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:20:01.436 12:55:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:20:01.436 12:55:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.PHGS0GLiUn 00:20:01.436 12:55:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:20:01.436 12:55:09 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:20:01.436 12:55:09 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:20:01.436 12:55:09 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:01.436 12:55:09 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:01.436 12:55:09 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:20:01.436 12:55:09 keyring_file -- nvmf/common.sh@733 -- # python - 00:20:01.436 12:55:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.PHGS0GLiUn 00:20:01.436 12:55:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.PHGS0GLiUn 00:20:01.436 12:55:09 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.PHGS0GLiUn 00:20:01.436 12:55:09 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PHGS0GLiUn 00:20:01.436 12:55:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PHGS0GLiUn 00:20:01.694 12:55:10 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:01.694 12:55:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:01.953 nvme0n1 00:20:01.953 12:55:10 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:20:01.953 12:55:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:01.953 12:55:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:01.953 12:55:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:01.953 12:55:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:01.953 12:55:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:02.213 12:55:10 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:20:02.213 12:55:10 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:20:02.213 12:55:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:20:02.472 12:55:11 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:20:02.472 12:55:11 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:20:02.472 12:55:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:02.472 12:55:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:02.472 12:55:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:02.731 12:55:11 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:20:02.731 12:55:11 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:20:02.731 12:55:11 keyring_file -- 
keyring/common.sh@12 -- # jq -r .refcnt 00:20:02.731 12:55:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:02.731 12:55:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:02.731 12:55:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:02.731 12:55:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:02.990 12:55:11 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:20:02.990 12:55:11 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:20:02.990 12:55:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:20:03.249 12:55:11 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:20:03.249 12:55:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:03.250 12:55:11 keyring_file -- keyring/file.sh@105 -- # jq length 00:20:03.508 12:55:12 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:20:03.508 12:55:12 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PHGS0GLiUn 00:20:03.508 12:55:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PHGS0GLiUn 00:20:03.768 12:55:12 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Tc7m7Ljp80 00:20:03.768 12:55:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Tc7m7Ljp80 00:20:04.027 12:55:12 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:04.027 12:55:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:04.285 nvme0n1 00:20:04.544 12:55:12 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:20:04.544 12:55:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:20:04.804 12:55:13 keyring_file -- keyring/file.sh@113 -- # config='{ 00:20:04.804 "subsystems": [ 00:20:04.804 { 00:20:04.804 "subsystem": "keyring", 00:20:04.804 "config": [ 00:20:04.804 { 00:20:04.804 "method": "keyring_file_add_key", 00:20:04.804 "params": { 00:20:04.804 "name": "key0", 00:20:04.804 "path": "/tmp/tmp.PHGS0GLiUn" 00:20:04.804 } 00:20:04.804 }, 00:20:04.804 { 00:20:04.804 "method": "keyring_file_add_key", 00:20:04.804 "params": { 00:20:04.804 "name": "key1", 00:20:04.804 "path": "/tmp/tmp.Tc7m7Ljp80" 00:20:04.804 } 00:20:04.804 } 00:20:04.804 ] 00:20:04.804 }, 00:20:04.804 { 00:20:04.804 "subsystem": "iobuf", 00:20:04.804 "config": [ 00:20:04.804 { 00:20:04.804 "method": "iobuf_set_options", 00:20:04.804 "params": { 00:20:04.804 "small_pool_count": 8192, 00:20:04.804 "large_pool_count": 1024, 00:20:04.804 "small_bufsize": 8192, 00:20:04.804 "large_bufsize": 135168, 00:20:04.804 "enable_numa": false 00:20:04.804 } 00:20:04.804 } 00:20:04.804 ] 00:20:04.804 }, 00:20:04.804 { 00:20:04.804 "subsystem": 
"sock", 00:20:04.804 "config": [ 00:20:04.804 { 00:20:04.804 "method": "sock_set_default_impl", 00:20:04.804 "params": { 00:20:04.804 "impl_name": "uring" 00:20:04.804 } 00:20:04.804 }, 00:20:04.804 { 00:20:04.804 "method": "sock_impl_set_options", 00:20:04.804 "params": { 00:20:04.804 "impl_name": "ssl", 00:20:04.804 "recv_buf_size": 4096, 00:20:04.804 "send_buf_size": 4096, 00:20:04.804 "enable_recv_pipe": true, 00:20:04.804 "enable_quickack": false, 00:20:04.804 "enable_placement_id": 0, 00:20:04.804 "enable_zerocopy_send_server": true, 00:20:04.804 "enable_zerocopy_send_client": false, 00:20:04.804 "zerocopy_threshold": 0, 00:20:04.804 "tls_version": 0, 00:20:04.804 "enable_ktls": false 00:20:04.804 } 00:20:04.804 }, 00:20:04.804 { 00:20:04.804 "method": "sock_impl_set_options", 00:20:04.804 "params": { 00:20:04.804 "impl_name": "posix", 00:20:04.804 "recv_buf_size": 2097152, 00:20:04.804 "send_buf_size": 2097152, 00:20:04.804 "enable_recv_pipe": true, 00:20:04.804 "enable_quickack": false, 00:20:04.804 "enable_placement_id": 0, 00:20:04.804 "enable_zerocopy_send_server": true, 00:20:04.804 "enable_zerocopy_send_client": false, 00:20:04.804 "zerocopy_threshold": 0, 00:20:04.804 "tls_version": 0, 00:20:04.804 "enable_ktls": false 00:20:04.804 } 00:20:04.804 }, 00:20:04.804 { 00:20:04.804 "method": "sock_impl_set_options", 00:20:04.804 "params": { 00:20:04.804 "impl_name": "uring", 00:20:04.804 "recv_buf_size": 2097152, 00:20:04.804 "send_buf_size": 2097152, 00:20:04.804 "enable_recv_pipe": true, 00:20:04.804 "enable_quickack": false, 00:20:04.804 "enable_placement_id": 0, 00:20:04.804 "enable_zerocopy_send_server": false, 00:20:04.804 "enable_zerocopy_send_client": false, 00:20:04.804 "zerocopy_threshold": 0, 00:20:04.804 "tls_version": 0, 00:20:04.804 "enable_ktls": false 00:20:04.804 } 00:20:04.804 } 00:20:04.804 ] 00:20:04.804 }, 00:20:04.804 { 00:20:04.804 "subsystem": "vmd", 00:20:04.804 "config": [] 00:20:04.804 }, 00:20:04.804 { 00:20:04.804 "subsystem": "accel", 00:20:04.804 "config": [ 00:20:04.804 { 00:20:04.804 "method": "accel_set_options", 00:20:04.804 "params": { 00:20:04.804 "small_cache_size": 128, 00:20:04.804 "large_cache_size": 16, 00:20:04.804 "task_count": 2048, 00:20:04.804 "sequence_count": 2048, 00:20:04.804 "buf_count": 2048 00:20:04.804 } 00:20:04.804 } 00:20:04.804 ] 00:20:04.804 }, 00:20:04.804 { 00:20:04.804 "subsystem": "bdev", 00:20:04.804 "config": [ 00:20:04.804 { 00:20:04.804 "method": "bdev_set_options", 00:20:04.804 "params": { 00:20:04.804 "bdev_io_pool_size": 65535, 00:20:04.804 "bdev_io_cache_size": 256, 00:20:04.804 "bdev_auto_examine": true, 00:20:04.804 "iobuf_small_cache_size": 128, 00:20:04.804 "iobuf_large_cache_size": 16 00:20:04.804 } 00:20:04.804 }, 00:20:04.804 { 00:20:04.804 "method": "bdev_raid_set_options", 00:20:04.804 "params": { 00:20:04.804 "process_window_size_kb": 1024, 00:20:04.804 "process_max_bandwidth_mb_sec": 0 00:20:04.804 } 00:20:04.804 }, 00:20:04.804 { 00:20:04.804 "method": "bdev_iscsi_set_options", 00:20:04.804 "params": { 00:20:04.804 "timeout_sec": 30 00:20:04.804 } 00:20:04.804 }, 00:20:04.804 { 00:20:04.804 "method": "bdev_nvme_set_options", 00:20:04.804 "params": { 00:20:04.804 "action_on_timeout": "none", 00:20:04.804 "timeout_us": 0, 00:20:04.804 "timeout_admin_us": 0, 00:20:04.804 "keep_alive_timeout_ms": 10000, 00:20:04.804 "arbitration_burst": 0, 00:20:04.804 "low_priority_weight": 0, 00:20:04.804 "medium_priority_weight": 0, 00:20:04.804 "high_priority_weight": 0, 00:20:04.804 "nvme_adminq_poll_period_us": 
10000, 00:20:04.804 "nvme_ioq_poll_period_us": 0, 00:20:04.804 "io_queue_requests": 512, 00:20:04.804 "delay_cmd_submit": true, 00:20:04.804 "transport_retry_count": 4, 00:20:04.804 "bdev_retry_count": 3, 00:20:04.804 "transport_ack_timeout": 0, 00:20:04.804 "ctrlr_loss_timeout_sec": 0, 00:20:04.804 "reconnect_delay_sec": 0, 00:20:04.804 "fast_io_fail_timeout_sec": 0, 00:20:04.804 "disable_auto_failback": false, 00:20:04.804 "generate_uuids": false, 00:20:04.804 "transport_tos": 0, 00:20:04.804 "nvme_error_stat": false, 00:20:04.804 "rdma_srq_size": 0, 00:20:04.804 "io_path_stat": false, 00:20:04.805 "allow_accel_sequence": false, 00:20:04.805 "rdma_max_cq_size": 0, 00:20:04.805 "rdma_cm_event_timeout_ms": 0, 00:20:04.805 "dhchap_digests": [ 00:20:04.805 "sha256", 00:20:04.805 "sha384", 00:20:04.805 "sha512" 00:20:04.805 ], 00:20:04.805 "dhchap_dhgroups": [ 00:20:04.805 "null", 00:20:04.805 "ffdhe2048", 00:20:04.805 "ffdhe3072", 00:20:04.805 "ffdhe4096", 00:20:04.805 "ffdhe6144", 00:20:04.805 "ffdhe8192" 00:20:04.805 ] 00:20:04.805 } 00:20:04.805 }, 00:20:04.805 { 00:20:04.805 "method": "bdev_nvme_attach_controller", 00:20:04.805 "params": { 00:20:04.805 "name": "nvme0", 00:20:04.805 "trtype": "TCP", 00:20:04.805 "adrfam": "IPv4", 00:20:04.805 "traddr": "127.0.0.1", 00:20:04.805 "trsvcid": "4420", 00:20:04.805 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:04.805 "prchk_reftag": false, 00:20:04.805 "prchk_guard": false, 00:20:04.805 "ctrlr_loss_timeout_sec": 0, 00:20:04.805 "reconnect_delay_sec": 0, 00:20:04.805 "fast_io_fail_timeout_sec": 0, 00:20:04.805 "psk": "key0", 00:20:04.805 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:04.805 "hdgst": false, 00:20:04.805 "ddgst": false, 00:20:04.805 "multipath": "multipath" 00:20:04.805 } 00:20:04.805 }, 00:20:04.805 { 00:20:04.805 "method": "bdev_nvme_set_hotplug", 00:20:04.805 "params": { 00:20:04.805 "period_us": 100000, 00:20:04.805 "enable": false 00:20:04.805 } 00:20:04.805 }, 00:20:04.805 { 00:20:04.805 "method": "bdev_wait_for_examine" 00:20:04.805 } 00:20:04.805 ] 00:20:04.805 }, 00:20:04.805 { 00:20:04.805 "subsystem": "nbd", 00:20:04.805 "config": [] 00:20:04.805 } 00:20:04.805 ] 00:20:04.805 }' 00:20:04.805 12:55:13 keyring_file -- keyring/file.sh@115 -- # killprocess 84578 00:20:04.805 12:55:13 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 84578 ']' 00:20:04.805 12:55:13 keyring_file -- common/autotest_common.sh@958 -- # kill -0 84578 00:20:04.805 12:55:13 keyring_file -- common/autotest_common.sh@959 -- # uname 00:20:04.805 12:55:13 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:04.805 12:55:13 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84578 00:20:04.805 killing process with pid 84578 00:20:04.805 Received shutdown signal, test time was about 1.000000 seconds 00:20:04.805 00:20:04.805 Latency(us) 00:20:04.805 [2024-11-15T12:55:13.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.805 [2024-11-15T12:55:13.475Z] =================================================================================================================== 00:20:04.805 [2024-11-15T12:55:13.475Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:04.805 12:55:13 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:04.805 12:55:13 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:04.805 12:55:13 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84578' 00:20:04.805 
12:55:13 keyring_file -- common/autotest_common.sh@973 -- # kill 84578 00:20:04.805 12:55:13 keyring_file -- common/autotest_common.sh@978 -- # wait 84578 00:20:04.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:04.805 12:55:13 keyring_file -- keyring/file.sh@118 -- # bperfpid=84815 00:20:04.805 12:55:13 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:20:04.805 12:55:13 keyring_file -- keyring/file.sh@120 -- # waitforlisten 84815 /var/tmp/bperf.sock 00:20:04.805 12:55:13 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 84815 ']' 00:20:04.805 12:55:13 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:04.805 12:55:13 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:04.805 12:55:13 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:04.805 12:55:13 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:04.805 12:55:13 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:20:04.805 "subsystems": [ 00:20:04.805 { 00:20:04.805 "subsystem": "keyring", 00:20:04.805 "config": [ 00:20:04.805 { 00:20:04.805 "method": "keyring_file_add_key", 00:20:04.805 "params": { 00:20:04.805 "name": "key0", 00:20:04.805 "path": "/tmp/tmp.PHGS0GLiUn" 00:20:04.805 } 00:20:04.805 }, 00:20:04.805 { 00:20:04.805 "method": "keyring_file_add_key", 00:20:04.805 "params": { 00:20:04.805 "name": "key1", 00:20:04.805 "path": "/tmp/tmp.Tc7m7Ljp80" 00:20:04.805 } 00:20:04.805 } 00:20:04.805 ] 00:20:04.805 }, 00:20:04.805 { 00:20:04.805 "subsystem": "iobuf", 00:20:04.805 "config": [ 00:20:04.805 { 00:20:04.805 "method": "iobuf_set_options", 00:20:04.805 "params": { 00:20:04.805 "small_pool_count": 8192, 00:20:04.805 "large_pool_count": 1024, 00:20:04.805 "small_bufsize": 8192, 00:20:04.805 "large_bufsize": 135168, 00:20:04.805 "enable_numa": false 00:20:04.805 } 00:20:04.805 } 00:20:04.805 ] 00:20:04.805 }, 00:20:04.805 { 00:20:04.805 "subsystem": "sock", 00:20:04.805 "config": [ 00:20:04.805 { 00:20:04.805 "method": "sock_set_default_impl", 00:20:04.805 "params": { 00:20:04.805 "impl_name": "uring" 00:20:04.805 } 00:20:04.805 }, 00:20:04.805 { 00:20:04.805 "method": "sock_impl_set_options", 00:20:04.805 "params": { 00:20:04.805 "impl_name": "ssl", 00:20:04.805 "recv_buf_size": 4096, 00:20:04.805 "send_buf_size": 4096, 00:20:04.805 "enable_recv_pipe": true, 00:20:04.805 "enable_quickack": false, 00:20:04.805 "enable_placement_id": 0, 00:20:04.805 "enable_zerocopy_send_server": true, 00:20:04.805 "enable_zerocopy_send_client": false, 00:20:04.805 "zerocopy_threshold": 0, 00:20:04.805 "tls_version": 0, 00:20:04.805 "enable_ktls": false 00:20:04.805 } 00:20:04.805 }, 00:20:04.805 { 00:20:04.805 "method": "sock_impl_set_options", 00:20:04.805 "params": { 00:20:04.805 "impl_name": "posix", 00:20:04.805 "recv_buf_size": 2097152, 00:20:04.805 "send_buf_size": 2097152, 00:20:04.805 "enable_recv_pipe": true, 00:20:04.805 "enable_quickack": false, 00:20:04.805 "enable_placement_id": 0, 00:20:04.805 "enable_zerocopy_send_server": true, 00:20:04.805 "enable_zerocopy_send_client": false, 00:20:04.805 "zerocopy_threshold": 0, 00:20:04.805 "tls_version": 0, 00:20:04.805 "enable_ktls": false 00:20:04.805 } 00:20:04.805 }, 00:20:04.805 { 00:20:04.805 "method": 
"sock_impl_set_options", 00:20:04.805 "params": { 00:20:04.805 "impl_name": "uring", 00:20:04.805 "recv_buf_size": 2097152, 00:20:04.805 "send_buf_size": 2097152, 00:20:04.805 "enable_recv_pipe": true, 00:20:04.805 "enable_quickack": false, 00:20:04.805 "enable_placement_id": 0, 00:20:04.805 "enable_zerocopy_send_server": false, 00:20:04.805 "enable_zerocopy_send_client": false, 00:20:04.805 "zerocopy_threshold": 0, 00:20:04.805 "tls_version": 0, 00:20:04.805 "enable_ktls": false 00:20:04.805 } 00:20:04.805 } 00:20:04.805 ] 00:20:04.805 }, 00:20:04.805 { 00:20:04.805 "subsystem": "vmd", 00:20:04.805 "config": [] 00:20:04.805 }, 00:20:04.805 { 00:20:04.805 "subsystem": "accel", 00:20:04.805 "config": [ 00:20:04.805 { 00:20:04.805 "method": "accel_set_options", 00:20:04.805 "params": { 00:20:04.805 "small_cache_size": 128, 00:20:04.805 "large_cache_size": 16, 00:20:04.805 "task_count": 2048, 00:20:04.805 "sequence_count": 2048, 00:20:04.805 "buf_count": 2048 00:20:04.805 } 00:20:04.805 } 00:20:04.805 ] 00:20:04.805 }, 00:20:04.805 { 00:20:04.805 "subsystem": "bdev", 00:20:04.805 "config": [ 00:20:04.805 { 00:20:04.805 "method": "bdev_set_options", 00:20:04.805 "params": { 00:20:04.805 "bdev_io_pool_size": 65535, 00:20:04.805 "bdev_io_cache_size": 256, 00:20:04.805 "bdev_auto_examine": true, 00:20:04.805 "iobuf_small_cache_size": 128, 00:20:04.805 "iobuf_large_cache_size": 16 00:20:04.805 } 00:20:04.805 }, 00:20:04.805 { 00:20:04.805 "method": "bdev_raid_set_options", 00:20:04.805 "params": { 00:20:04.805 "process_window_size_kb": 1024, 00:20:04.806 "process_max_bandwidth_mb_sec": 0 00:20:04.806 } 00:20:04.806 }, 00:20:04.806 { 00:20:04.806 "method": "bdev_iscsi_set_options", 00:20:04.806 "params": { 00:20:04.806 "timeout_sec": 30 00:20:04.806 } 00:20:04.806 }, 00:20:04.806 { 00:20:04.806 "method": "bdev_nvme_set_options", 00:20:04.806 "params": { 00:20:04.806 "action_on_timeout": "none", 00:20:04.806 "timeout_us": 0, 00:20:04.806 "timeout_admin_us": 0, 00:20:04.806 "keep_alive_timeout_ms": 10000, 00:20:04.806 "arbitration_burst": 0, 00:20:04.806 "low_priority_weight": 0, 00:20:04.806 "medium_priority_weight": 0, 00:20:04.806 "high_priority_weight": 0, 00:20:04.806 "nvme_adminq_poll_period_us": 10000, 00:20:04.806 "nvme_ioq_poll_period_us": 0, 00:20:04.806 "io_queue_requests": 512, 00:20:04.806 "delay_cmd_submit": true, 00:20:04.806 "transport_retry_count": 4, 00:20:04.806 "bdev_retry_count": 3, 00:20:04.806 "transport_ack_timeout": 0, 00:20:04.806 "ctrlr_loss_timeout_sec": 0, 00:20:04.806 "reconnect_delay_sec": 0, 00:20:04.806 "fast_io_fail_timeout_sec": 0, 00:20:04.806 "disable_auto_failback": false, 00:20:04.806 "generate_uuids": false, 00:20:04.806 "transport_tos": 0, 00:20:04.806 "nvme_error_stat": false, 00:20:04.806 "rdma_srq_size": 0, 00:20:04.806 "io_path_stat": false, 00:20:04.806 "allow_accel_sequence": false, 00:20:04.806 "rdma_max_cq_size": 0, 00:20:04.806 "rdma_cm_event_timeout_ms": 0, 00:20:04.806 "dhchap_digests": [ 00:20:04.806 "sha256", 00:20:04.806 "sha384", 00:20:04.806 "sha512" 00:20:04.806 ], 00:20:04.806 "dhchap_dhgroups": [ 00:20:04.806 "null", 00:20:04.806 "ffdhe2048", 00:20:04.806 "ffdhe3072", 00:20:04.806 "ffdhe4096", 00:20:04.806 "ffdhe6144", 00:20:04.806 "ffdhe8192" 00:20:04.806 ] 00:20:04.806 } 00:20:04.806 }, 00:20:04.806 { 00:20:04.806 "method": "bdev_nvme_attach_controller", 00:20:04.806 "params": { 00:20:04.806 "name": "nvme0", 00:20:04.806 "trtype": "TCP", 00:20:04.806 "adrfam": "IPv4", 00:20:04.806 "traddr": "127.0.0.1", 00:20:04.806 "trsvcid": "4420", 
00:20:04.806 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:04.806 "prchk_reftag": false, 00:20:04.806 "prchk_guard": false, 00:20:04.806 "ctrlr_loss_timeout_sec": 0, 00:20:04.806 "reconnect_delay_sec": 0, 00:20:04.806 "fast_io_fail_timeout_sec": 0, 00:20:04.806 "psk": "key0", 00:20:04.806 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:04.806 "hdgst": false, 00:20:04.806 "ddgst": false, 00:20:04.806 "multipath": "multipath" 00:20:04.806 } 00:20:04.806 }, 00:20:04.806 { 00:20:04.806 "method": "bdev_nvme_set_hotplug", 00:20:04.806 "params": { 00:20:04.806 "period_us": 100000, 00:20:04.806 "enable": false 00:20:04.806 } 00:20:04.806 }, 00:20:04.806 { 00:20:04.806 "method": "bdev_wait_for_examine" 00:20:04.806 } 00:20:04.806 ] 00:20:04.806 }, 00:20:04.806 { 00:20:04.806 "subsystem": "nbd", 00:20:04.806 "config": [] 00:20:04.806 } 00:20:04.806 ] 00:20:04.806 }' 00:20:04.806 12:55:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:04.806 [2024-11-15 12:55:13.448203] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 00:20:04.806 [2024-11-15 12:55:13.448435] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84815 ] 00:20:05.066 [2024-11-15 12:55:13.585412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.066 [2024-11-15 12:55:13.614426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:05.066 [2024-11-15 12:55:13.722237] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:05.325 [2024-11-15 12:55:13.763305] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:05.892 12:55:14 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:05.892 12:55:14 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:20:05.892 12:55:14 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:20:05.892 12:55:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:05.892 12:55:14 keyring_file -- keyring/file.sh@121 -- # jq length 00:20:06.151 12:55:14 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:20:06.151 12:55:14 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:20:06.151 12:55:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:06.151 12:55:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:06.151 12:55:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:06.151 12:55:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:06.151 12:55:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:06.410 12:55:14 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:20:06.410 12:55:14 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:20:06.410 12:55:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:06.410 12:55:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:06.410 12:55:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:06.410 12:55:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:06.410 12:55:14 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:06.668 12:55:15 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:20:06.668 12:55:15 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:20:06.668 12:55:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:20:06.668 12:55:15 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:20:06.926 12:55:15 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:20:06.926 12:55:15 keyring_file -- keyring/file.sh@1 -- # cleanup 00:20:06.926 12:55:15 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.PHGS0GLiUn /tmp/tmp.Tc7m7Ljp80 00:20:06.926 12:55:15 keyring_file -- keyring/file.sh@20 -- # killprocess 84815 00:20:06.926 12:55:15 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 84815 ']' 00:20:06.926 12:55:15 keyring_file -- common/autotest_common.sh@958 -- # kill -0 84815 00:20:06.926 12:55:15 keyring_file -- common/autotest_common.sh@959 -- # uname 00:20:06.926 12:55:15 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:06.926 12:55:15 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84815 00:20:06.926 killing process with pid 84815 00:20:06.926 Received shutdown signal, test time was about 1.000000 seconds 00:20:06.926 00:20:06.926 Latency(us) 00:20:06.926 [2024-11-15T12:55:15.596Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.926 [2024-11-15T12:55:15.596Z] =================================================================================================================== 00:20:06.926 [2024-11-15T12:55:15.596Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:06.926 12:55:15 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:06.926 12:55:15 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:06.926 12:55:15 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84815' 00:20:06.926 12:55:15 keyring_file -- common/autotest_common.sh@973 -- # kill 84815 00:20:06.926 12:55:15 keyring_file -- common/autotest_common.sh@978 -- # wait 84815 00:20:07.186 12:55:15 keyring_file -- keyring/file.sh@21 -- # killprocess 84568 00:20:07.186 12:55:15 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 84568 ']' 00:20:07.186 12:55:15 keyring_file -- common/autotest_common.sh@958 -- # kill -0 84568 00:20:07.186 12:55:15 keyring_file -- common/autotest_common.sh@959 -- # uname 00:20:07.186 12:55:15 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:07.186 12:55:15 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84568 00:20:07.186 killing process with pid 84568 00:20:07.186 12:55:15 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:07.186 12:55:15 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:07.186 12:55:15 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84568' 00:20:07.186 12:55:15 keyring_file -- common/autotest_common.sh@973 -- # kill 84568 00:20:07.186 12:55:15 keyring_file -- common/autotest_common.sh@978 -- # wait 84568 00:20:07.445 00:20:07.445 real 0m14.273s 00:20:07.445 user 0m36.879s 00:20:07.445 sys 0m2.622s 00:20:07.445 12:55:15 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:07.445 
************************************ 00:20:07.445 END TEST keyring_file 00:20:07.445 ************************************ 00:20:07.445 12:55:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:07.445 12:55:15 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:20:07.445 12:55:15 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:20:07.445 12:55:15 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:07.445 12:55:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:07.445 12:55:15 -- common/autotest_common.sh@10 -- # set +x 00:20:07.445 ************************************ 00:20:07.445 START TEST keyring_linux 00:20:07.445 ************************************ 00:20:07.445 12:55:15 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:20:07.445 Joined session keyring: 798806624 00:20:07.445 * Looking for test storage... 00:20:07.445 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:20:07.445 12:55:16 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:07.445 12:55:16 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:20:07.445 12:55:16 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:07.704 12:55:16 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@345 -- # : 1 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@368 -- # return 0 00:20:07.704 12:55:16 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:07.704 12:55:16 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:07.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.704 --rc genhtml_branch_coverage=1 00:20:07.704 --rc genhtml_function_coverage=1 00:20:07.704 --rc genhtml_legend=1 00:20:07.704 --rc geninfo_all_blocks=1 00:20:07.704 --rc geninfo_unexecuted_blocks=1 00:20:07.704 00:20:07.704 ' 00:20:07.704 12:55:16 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:07.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.704 --rc genhtml_branch_coverage=1 00:20:07.704 --rc genhtml_function_coverage=1 00:20:07.704 --rc genhtml_legend=1 00:20:07.704 --rc geninfo_all_blocks=1 00:20:07.704 --rc geninfo_unexecuted_blocks=1 00:20:07.704 00:20:07.704 ' 00:20:07.704 12:55:16 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:07.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.704 --rc genhtml_branch_coverage=1 00:20:07.704 --rc genhtml_function_coverage=1 00:20:07.704 --rc genhtml_legend=1 00:20:07.704 --rc geninfo_all_blocks=1 00:20:07.704 --rc geninfo_unexecuted_blocks=1 00:20:07.704 00:20:07.704 ' 00:20:07.704 12:55:16 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:07.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.704 --rc genhtml_branch_coverage=1 00:20:07.704 --rc genhtml_function_coverage=1 00:20:07.704 --rc genhtml_legend=1 00:20:07.704 --rc geninfo_all_blocks=1 00:20:07.704 --rc geninfo_unexecuted_blocks=1 00:20:07.704 00:20:07.704 ' 00:20:07.704 12:55:16 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:20:07.704 12:55:16 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:07.704 12:55:16 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:20:07.704 12:55:16 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.704 12:55:16 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.704 12:55:16 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.704 12:55:16 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.704 12:55:16 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.704 12:55:16 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.704 12:55:16 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.704 12:55:16 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.704 12:55:16 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.704 12:55:16 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.704 12:55:16 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:20:07.704 12:55:16 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=85bcfa6f-4742-42db-8cde-87c16c4a32fc 00:20:07.704 12:55:16 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.704 12:55:16 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.704 12:55:16 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:07.704 12:55:16 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:07.704 12:55:16 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.704 12:55:16 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.704 12:55:16 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.704 12:55:16 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.704 12:55:16 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.704 12:55:16 keyring_linux -- paths/export.sh@5 -- # export PATH 00:20:07.704 12:55:16 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.704 12:55:16 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:20:07.704 12:55:16 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:07.704 12:55:16 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:07.704 12:55:16 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:07.704 12:55:16 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.704 12:55:16 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.704 12:55:16 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:07.704 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:07.704 12:55:16 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:07.704 12:55:16 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:07.704 12:55:16 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:07.704 12:55:16 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:20:07.704 12:55:16 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:20:07.704 12:55:16 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:20:07.704 12:55:16 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:20:07.704 12:55:16 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:20:07.704 12:55:16 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:20:07.704 12:55:16 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:20:07.704 12:55:16 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:20:07.704 12:55:16 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:20:07.704 12:55:16 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:20:07.704 12:55:16 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:20:07.705 12:55:16 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:20:07.705 12:55:16 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:20:07.705 12:55:16 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:20:07.705 12:55:16 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:20:07.705 12:55:16 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:07.705 12:55:16 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:07.705 12:55:16 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:20:07.705 12:55:16 keyring_linux -- nvmf/common.sh@733 -- # python - 00:20:07.705 12:55:16 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:20:07.705 12:55:16 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:20:07.705 /tmp/:spdk-test:key0 00:20:07.705 12:55:16 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:20:07.705 12:55:16 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:20:07.705 12:55:16 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:20:07.705 12:55:16 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:20:07.705 12:55:16 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:20:07.705 12:55:16 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:20:07.705 12:55:16 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:20:07.705 12:55:16 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:20:07.705 12:55:16 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:20:07.705 12:55:16 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:07.705 12:55:16 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:20:07.705 12:55:16 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:20:07.705 12:55:16 keyring_linux -- nvmf/common.sh@733 -- # python - 00:20:07.705 12:55:16 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:20:07.705 12:55:16 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:20:07.705 /tmp/:spdk-test:key1 00:20:07.705 12:55:16 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=84941 00:20:07.705 12:55:16 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 84941 00:20:07.705 12:55:16 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:07.705 12:55:16 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 84941 ']' 00:20:07.705 12:55:16 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.705 12:55:16 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:07.705 12:55:16 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.705 12:55:16 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:07.705 12:55:16 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:07.705 [2024-11-15 12:55:16.362315] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
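For reference, the prep_key calls traced above boil down to the following. This is a condensed sketch reconstructed from the xtrace lines, not the script source, and it assumes that format_interchange_psk (sourced from nvmf/common.sh) prints the interchange-format string to stdout so the caller only has to redirect it and lock the file down:

# Sketch of the prep_key flow from test/keyring/common.sh, using this run's values.
key0_hex=00112233445566778899aabbccddeeff      # key0 material
key1_hex=112233445566778899aabbccddeeff00      # key1 material
# format_interchange_psk <hex-key> <hash-id> wraps the key as
# "NVMeTLSkey-1:00:<base64 payload>:" (the strings visible further down in the log).
format_interchange_psk "$key0_hex" 0 > /tmp/:spdk-test:key0
format_interchange_psk "$key1_hex" 0 > /tmp/:spdk-test:key1
chmod 0600 /tmp/:spdk-test:key0 /tmp/:spdk-test:key1   # PSK files must not be world-readable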
00:20:07.705 [2024-11-15 12:55:16.362402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84941 ] 00:20:07.964 [2024-11-15 12:55:16.501728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.964 [2024-11-15 12:55:16.529308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.964 [2024-11-15 12:55:16.564261] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:08.901 12:55:17 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.901 12:55:17 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:20:08.901 12:55:17 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:20:08.901 12:55:17 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.901 12:55:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:08.901 [2024-11-15 12:55:17.316039] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.901 null0 00:20:08.901 [2024-11-15 12:55:17.348026] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:08.901 [2024-11-15 12:55:17.348174] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:20:08.901 12:55:17 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.902 12:55:17 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:20:08.902 558559038 00:20:08.902 12:55:17 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:20:08.902 533609224 00:20:08.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:08.902 12:55:17 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=84955 00:20:08.902 12:55:17 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:20:08.902 12:55:17 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 84955 /var/tmp/bperf.sock 00:20:08.902 12:55:17 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 84955 ']' 00:20:08.902 12:55:17 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:08.902 12:55:17 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.902 12:55:17 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:08.902 12:55:17 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.902 12:55:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:08.902 [2024-11-15 12:55:17.413140] Starting SPDK v25.01-pre git sha1 d2671b4b7 / DPDK 24.03.0 initialization... 
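The two keyctl add calls above load those interchange-format PSKs into the kernel session keyring (@s); the bare numbers printed underneath (558559038 and 533609224) are the serials of the newly created keys. Standalone, using only values taken from this log, the same thing is:

# Register the PSKs with the kernel; keyctl prints each new key's serial number.
keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s   # 558559038 in this run
keyctl add user :spdk-test:key1 "NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs:" @s   # 533609224 in this run
keyctl search @s user :spdk-test:key0   # re-resolves the serial, as the test does later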
00:20:08.902 [2024-11-15 12:55:17.413506] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84955 ] 00:20:08.902 [2024-11-15 12:55:17.556569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.160 [2024-11-15 12:55:17.586966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.160 12:55:17 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:09.160 12:55:17 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:20:09.160 12:55:17 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:20:09.160 12:55:17 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:20:09.420 12:55:17 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:20:09.420 12:55:17 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:09.679 [2024-11-15 12:55:18.160580] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:09.679 12:55:18 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:20:09.679 12:55:18 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:20:09.938 [2024-11-15 12:55:18.400892] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:09.938 nvme0n1 00:20:09.938 12:55:18 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:20:09.938 12:55:18 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:20:09.938 12:55:18 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:20:09.938 12:55:18 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:20:09.938 12:55:18 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:20:09.938 12:55:18 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:10.197 12:55:18 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:20:10.197 12:55:18 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:20:10.197 12:55:18 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:20:10.197 12:55:18 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:20:10.197 12:55:18 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:10.197 12:55:18 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:20:10.197 12:55:18 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:10.455 12:55:18 keyring_linux -- keyring/linux.sh@25 -- # sn=558559038 00:20:10.455 12:55:18 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:20:10.455 12:55:18 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
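Stripped of the bperf_cmd wrapper (which just prepends the repo's scripts/rpc.py with -s /var/tmp/bperf.sock), the happy-path setup traced above is three RPC calls against the bdevperf application; the commands and arguments below are the ones from the trace, with rpc.py standing in for its full path:

# Enable the Linux-keyring backend, finish subsystem init, then attach the
# controller using the in-kernel key instead of a key file on disk.
rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
rpc.py -s /var/tmp/bperf.sock framework_start_init
rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
# check_keys 1 :spdk-test:key0 then asserts that keyring_get_keys reports exactly
# one key and that its .sn matches what `keyctl search @s user :spdk-test:key0`
# returns (558559038 above).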
00:20:10.455 12:55:18 keyring_linux -- keyring/linux.sh@26 -- # [[ 558559038 == \5\5\8\5\5\9\0\3\8 ]] 00:20:10.455 12:55:19 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 558559038 00:20:10.455 12:55:19 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:20:10.455 12:55:19 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:10.455 Running I/O for 1 seconds... 00:20:11.832 14490.00 IOPS, 56.60 MiB/s 00:20:11.832 Latency(us) 00:20:11.832 [2024-11-15T12:55:20.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.832 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:11.832 nvme0n1 : 1.01 14494.66 56.62 0.00 0.00 8789.00 5868.45 14477.50 00:20:11.832 [2024-11-15T12:55:20.502Z] =================================================================================================================== 00:20:11.832 [2024-11-15T12:55:20.502Z] Total : 14494.66 56.62 0.00 0.00 8789.00 5868.45 14477.50 00:20:11.832 { 00:20:11.832 "results": [ 00:20:11.832 { 00:20:11.832 "job": "nvme0n1", 00:20:11.832 "core_mask": "0x2", 00:20:11.832 "workload": "randread", 00:20:11.832 "status": "finished", 00:20:11.832 "queue_depth": 128, 00:20:11.832 "io_size": 4096, 00:20:11.832 "runtime": 1.008578, 00:20:11.832 "iops": 14494.664765640337, 00:20:11.832 "mibps": 56.619784240782565, 00:20:11.832 "io_failed": 0, 00:20:11.832 "io_timeout": 0, 00:20:11.832 "avg_latency_us": 8788.999342076624, 00:20:11.832 "min_latency_us": 5868.450909090909, 00:20:11.832 "max_latency_us": 14477.498181818182 00:20:11.832 } 00:20:11.832 ], 00:20:11.832 "core_count": 1 00:20:11.832 } 00:20:11.832 12:55:20 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:20:11.832 12:55:20 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:20:11.832 12:55:20 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:20:11.832 12:55:20 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:20:11.832 12:55:20 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:20:11.832 12:55:20 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:20:11.832 12:55:20 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:11.833 12:55:20 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:20:12.092 12:55:20 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:20:12.092 12:55:20 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:20:12.092 12:55:20 keyring_linux -- keyring/linux.sh@23 -- # return 00:20:12.092 12:55:20 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:12.092 12:55:20 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:20:12.092 12:55:20 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:12.092 
12:55:20 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:20:12.092 12:55:20 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.092 12:55:20 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:20:12.092 12:55:20 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.092 12:55:20 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:12.092 12:55:20 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:12.352 [2024-11-15 12:55:20.921197] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:12.352 [2024-11-15 12:55:20.921271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee85d0 (107): Transport endpoint is not connected 00:20:12.352 [2024-11-15 12:55:20.922263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee85d0 (9): Bad file descriptor 00:20:12.352 [2024-11-15 12:55:20.923261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:20:12.352 [2024-11-15 12:55:20.923282] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:20:12.352 [2024-11-15 12:55:20.923311] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:20:12.352 [2024-11-15 12:55:20.923320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:20:12.352 request: 00:20:12.352 { 00:20:12.352 "name": "nvme0", 00:20:12.352 "trtype": "tcp", 00:20:12.352 "traddr": "127.0.0.1", 00:20:12.352 "adrfam": "ipv4", 00:20:12.352 "trsvcid": "4420", 00:20:12.352 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:12.352 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:12.352 "prchk_reftag": false, 00:20:12.352 "prchk_guard": false, 00:20:12.352 "hdgst": false, 00:20:12.352 "ddgst": false, 00:20:12.352 "psk": ":spdk-test:key1", 00:20:12.352 "allow_unrecognized_csi": false, 00:20:12.352 "method": "bdev_nvme_attach_controller", 00:20:12.352 "req_id": 1 00:20:12.352 } 00:20:12.352 Got JSON-RPC error response 00:20:12.352 response: 00:20:12.352 { 00:20:12.352 "code": -5, 00:20:12.352 "message": "Input/output error" 00:20:12.352 } 00:20:12.352 12:55:20 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:20:12.352 12:55:20 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:12.352 12:55:20 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:12.352 12:55:20 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:12.352 12:55:20 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:20:12.352 12:55:20 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:20:12.352 12:55:20 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:20:12.352 12:55:20 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:20:12.352 12:55:20 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:20:12.352 12:55:20 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:20:12.352 12:55:20 keyring_linux -- keyring/linux.sh@33 -- # sn=558559038 00:20:12.352 12:55:20 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 558559038 00:20:12.352 1 links removed 00:20:12.352 12:55:20 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:20:12.352 12:55:20 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:20:12.352 12:55:20 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:20:12.352 12:55:20 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:20:12.352 12:55:20 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:20:12.352 12:55:20 keyring_linux -- keyring/linux.sh@33 -- # sn=533609224 00:20:12.352 12:55:20 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 533609224 00:20:12.352 1 links removed 00:20:12.352 12:55:20 keyring_linux -- keyring/linux.sh@41 -- # killprocess 84955 00:20:12.352 12:55:20 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 84955 ']' 00:20:12.352 12:55:20 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 84955 00:20:12.352 12:55:20 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:20:12.352 12:55:20 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.352 12:55:20 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84955 00:20:12.352 killing process with pid 84955 00:20:12.352 Received shutdown signal, test time was about 1.000000 seconds 00:20:12.352 00:20:12.352 Latency(us) 00:20:12.352 [2024-11-15T12:55:21.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.352 [2024-11-15T12:55:21.022Z] =================================================================================================================== 00:20:12.352 [2024-11-15T12:55:21.022Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:12.352 12:55:20 keyring_linux -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:12.352 12:55:20 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:12.352 12:55:20 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84955' 00:20:12.352 12:55:20 keyring_linux -- common/autotest_common.sh@973 -- # kill 84955 00:20:12.352 12:55:20 keyring_linux -- common/autotest_common.sh@978 -- # wait 84955 00:20:12.611 12:55:21 keyring_linux -- keyring/linux.sh@42 -- # killprocess 84941 00:20:12.611 12:55:21 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 84941 ']' 00:20:12.611 12:55:21 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 84941 00:20:12.611 12:55:21 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:20:12.611 12:55:21 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.611 12:55:21 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84941 00:20:12.611 killing process with pid 84941 00:20:12.611 12:55:21 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:12.611 12:55:21 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:12.611 12:55:21 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84941' 00:20:12.611 12:55:21 keyring_linux -- common/autotest_common.sh@973 -- # kill 84941 00:20:12.611 12:55:21 keyring_linux -- common/autotest_common.sh@978 -- # wait 84941 00:20:12.871 00:20:12.871 real 0m5.352s 00:20:12.871 user 0m10.409s 00:20:12.871 sys 0m1.322s 00:20:12.871 ************************************ 00:20:12.871 END TEST keyring_linux 00:20:12.871 ************************************ 00:20:12.871 12:55:21 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:12.871 12:55:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:12.871 12:55:21 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:12.871 12:55:21 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:20:12.871 12:55:21 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:20:12.871 12:55:21 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:20:12.871 12:55:21 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:20:12.871 12:55:21 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:20:12.871 12:55:21 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:20:12.871 12:55:21 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:20:12.871 12:55:21 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:20:12.871 12:55:21 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:20:12.871 12:55:21 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:20:12.871 12:55:21 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:20:12.871 12:55:21 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:20:12.871 12:55:21 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:20:12.871 12:55:21 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:20:12.871 12:55:21 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:20:12.871 12:55:21 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:20:12.871 12:55:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:12.871 12:55:21 -- common/autotest_common.sh@10 -- # set +x 00:20:12.871 12:55:21 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:20:12.871 12:55:21 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:20:12.871 12:55:21 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:20:12.871 12:55:21 -- common/autotest_common.sh@10 -- # set +x 00:20:14.781 INFO: APP EXITING 00:20:14.781 INFO: killing all VMs 
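The failed attach earlier is the intended negative case: :spdk-test:key1 exists in the session keyring but, presumably because the target side was only set up to accept key0, the connection is torn down and the NOT wrapper treats the non-zero exit status as a pass. The cleanup trap that runs next then drops both keys and stops both daemons; condensed from the trace, with this run's serials and PIDs:

# cleanup() from keyring/linux.sh, condensed.
for name in :spdk-test:key0 :spdk-test:key1; do
    sn=$(keyctl search @s user "$name")   # 558559038 and 533609224 here
    keyctl unlink "$sn"                   # log shows "1 links removed" for each key
done
kill 84955   # bdevperf  (reactor_1)
kill 84941   # spdk_tgt  (reactor_0)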
00:20:14.781 INFO: killing vhost app 00:20:14.781 INFO: EXIT DONE 00:20:15.348 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:15.348 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:20:15.348 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:20:16.284 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:16.284 Cleaning 00:20:16.284 Removing: /var/run/dpdk/spdk0/config 00:20:16.284 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:16.284 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:16.284 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:16.284 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:16.284 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:16.284 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:16.284 Removing: /var/run/dpdk/spdk1/config 00:20:16.284 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:20:16.284 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:20:16.284 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:20:16.284 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:20:16.284 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:20:16.284 Removing: /var/run/dpdk/spdk1/hugepage_info 00:20:16.284 Removing: /var/run/dpdk/spdk2/config 00:20:16.284 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:20:16.284 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:20:16.284 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:20:16.284 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:20:16.284 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:20:16.284 Removing: /var/run/dpdk/spdk2/hugepage_info 00:20:16.284 Removing: /var/run/dpdk/spdk3/config 00:20:16.284 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:20:16.284 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:20:16.284 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:20:16.284 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:20:16.284 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:20:16.284 Removing: /var/run/dpdk/spdk3/hugepage_info 00:20:16.284 Removing: /var/run/dpdk/spdk4/config 00:20:16.284 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:20:16.284 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:20:16.284 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:20:16.284 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:20:16.284 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:20:16.284 Removing: /var/run/dpdk/spdk4/hugepage_info 00:20:16.284 Removing: /dev/shm/nvmf_trace.0 00:20:16.284 Removing: /dev/shm/spdk_tgt_trace.pid56762 00:20:16.284 Removing: /var/run/dpdk/spdk0 00:20:16.284 Removing: /var/run/dpdk/spdk1 00:20:16.284 Removing: /var/run/dpdk/spdk2 00:20:16.284 Removing: /var/run/dpdk/spdk3 00:20:16.284 Removing: /var/run/dpdk/spdk4 00:20:16.284 Removing: /var/run/dpdk/spdk_pid56615 00:20:16.284 Removing: /var/run/dpdk/spdk_pid56762 00:20:16.284 Removing: /var/run/dpdk/spdk_pid56961 00:20:16.284 Removing: /var/run/dpdk/spdk_pid57042 00:20:16.284 Removing: /var/run/dpdk/spdk_pid57062 00:20:16.284 Removing: /var/run/dpdk/spdk_pid57166 00:20:16.284 Removing: /var/run/dpdk/spdk_pid57176 00:20:16.284 Removing: /var/run/dpdk/spdk_pid57310 00:20:16.284 Removing: /var/run/dpdk/spdk_pid57506 00:20:16.284 Removing: /var/run/dpdk/spdk_pid57660 00:20:16.284 Removing: /var/run/dpdk/spdk_pid57732 00:20:16.284 
Removing: /var/run/dpdk/spdk_pid57809 00:20:16.284 Removing: /var/run/dpdk/spdk_pid57895 00:20:16.284 Removing: /var/run/dpdk/spdk_pid57967 00:20:16.284 Removing: /var/run/dpdk/spdk_pid58000 00:20:16.284 Removing: /var/run/dpdk/spdk_pid58035 00:20:16.284 Removing: /var/run/dpdk/spdk_pid58105 00:20:16.284 Removing: /var/run/dpdk/spdk_pid58191 00:20:16.284 Removing: /var/run/dpdk/spdk_pid58625 00:20:16.284 Removing: /var/run/dpdk/spdk_pid58671 00:20:16.284 Removing: /var/run/dpdk/spdk_pid58709 00:20:16.284 Removing: /var/run/dpdk/spdk_pid58723 00:20:16.284 Removing: /var/run/dpdk/spdk_pid58779 00:20:16.284 Removing: /var/run/dpdk/spdk_pid58782 00:20:16.284 Removing: /var/run/dpdk/spdk_pid58849 00:20:16.284 Removing: /var/run/dpdk/spdk_pid58865 00:20:16.284 Removing: /var/run/dpdk/spdk_pid58911 00:20:16.284 Removing: /var/run/dpdk/spdk_pid58921 00:20:16.284 Removing: /var/run/dpdk/spdk_pid58961 00:20:16.284 Removing: /var/run/dpdk/spdk_pid58966 00:20:16.284 Removing: /var/run/dpdk/spdk_pid59097 00:20:16.284 Removing: /var/run/dpdk/spdk_pid59131 00:20:16.284 Removing: /var/run/dpdk/spdk_pid59209 00:20:16.284 Removing: /var/run/dpdk/spdk_pid59536 00:20:16.284 Removing: /var/run/dpdk/spdk_pid59552 00:20:16.284 Removing: /var/run/dpdk/spdk_pid59580 00:20:16.284 Removing: /var/run/dpdk/spdk_pid59598 00:20:16.284 Removing: /var/run/dpdk/spdk_pid59608 00:20:16.284 Removing: /var/run/dpdk/spdk_pid59627 00:20:16.284 Removing: /var/run/dpdk/spdk_pid59640 00:20:16.284 Removing: /var/run/dpdk/spdk_pid59656 00:20:16.284 Removing: /var/run/dpdk/spdk_pid59675 00:20:16.284 Removing: /var/run/dpdk/spdk_pid59688 00:20:16.284 Removing: /var/run/dpdk/spdk_pid59704 00:20:16.284 Removing: /var/run/dpdk/spdk_pid59723 00:20:16.284 Removing: /var/run/dpdk/spdk_pid59731 00:20:16.284 Removing: /var/run/dpdk/spdk_pid59752 00:20:16.542 Removing: /var/run/dpdk/spdk_pid59765 00:20:16.542 Removing: /var/run/dpdk/spdk_pid59779 00:20:16.542 Removing: /var/run/dpdk/spdk_pid59794 00:20:16.542 Removing: /var/run/dpdk/spdk_pid59808 00:20:16.542 Removing: /var/run/dpdk/spdk_pid59827 00:20:16.542 Removing: /var/run/dpdk/spdk_pid59837 00:20:16.542 Removing: /var/run/dpdk/spdk_pid59873 00:20:16.542 Removing: /var/run/dpdk/spdk_pid59881 00:20:16.542 Removing: /var/run/dpdk/spdk_pid59916 00:20:16.542 Removing: /var/run/dpdk/spdk_pid59977 00:20:16.542 Removing: /var/run/dpdk/spdk_pid60011 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60015 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60049 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60053 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60059 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60103 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60111 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60145 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60149 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60157 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60168 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60172 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60187 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60191 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60195 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60229 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60250 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60265 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60288 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60302 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60305 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60340 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60357 00:20:16.543 Removing: 
/var/run/dpdk/spdk_pid60378 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60391 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60393 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60395 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60408 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60412 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60420 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60427 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60504 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60541 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60648 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60686 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60727 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60741 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60758 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60772 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60811 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60827 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60900 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60917 00:20:16.543 Removing: /var/run/dpdk/spdk_pid60957 00:20:16.543 Removing: /var/run/dpdk/spdk_pid61007 00:20:16.543 Removing: /var/run/dpdk/spdk_pid61057 00:20:16.543 Removing: /var/run/dpdk/spdk_pid61083 00:20:16.543 Removing: /var/run/dpdk/spdk_pid61178 00:20:16.543 Removing: /var/run/dpdk/spdk_pid61226 00:20:16.543 Removing: /var/run/dpdk/spdk_pid61253 00:20:16.543 Removing: /var/run/dpdk/spdk_pid61485 00:20:16.543 Removing: /var/run/dpdk/spdk_pid61577 00:20:16.543 Removing: /var/run/dpdk/spdk_pid61600 00:20:16.543 Removing: /var/run/dpdk/spdk_pid61635 00:20:16.543 Removing: /var/run/dpdk/spdk_pid61663 00:20:16.543 Removing: /var/run/dpdk/spdk_pid61702 00:20:16.543 Removing: /var/run/dpdk/spdk_pid61730 00:20:16.543 Removing: /var/run/dpdk/spdk_pid61767 00:20:16.543 Removing: /var/run/dpdk/spdk_pid62160 00:20:16.543 Removing: /var/run/dpdk/spdk_pid62195 00:20:16.543 Removing: /var/run/dpdk/spdk_pid62533 00:20:16.543 Removing: /var/run/dpdk/spdk_pid62979 00:20:16.543 Removing: /var/run/dpdk/spdk_pid63241 00:20:16.543 Removing: /var/run/dpdk/spdk_pid64099 00:20:16.543 Removing: /var/run/dpdk/spdk_pid65003 00:20:16.543 Removing: /var/run/dpdk/spdk_pid65115 00:20:16.543 Removing: /var/run/dpdk/spdk_pid65187 00:20:16.543 Removing: /var/run/dpdk/spdk_pid66587 00:20:16.543 Removing: /var/run/dpdk/spdk_pid66890 00:20:16.543 Removing: /var/run/dpdk/spdk_pid70480 00:20:16.543 Removing: /var/run/dpdk/spdk_pid70833 00:20:16.543 Removing: /var/run/dpdk/spdk_pid70943 00:20:16.543 Removing: /var/run/dpdk/spdk_pid71070 00:20:16.543 Removing: /var/run/dpdk/spdk_pid71091 00:20:16.543 Removing: /var/run/dpdk/spdk_pid71111 00:20:16.543 Removing: /var/run/dpdk/spdk_pid71127 00:20:16.802 Removing: /var/run/dpdk/spdk_pid71213 00:20:16.802 Removing: /var/run/dpdk/spdk_pid71344 00:20:16.802 Removing: /var/run/dpdk/spdk_pid71482 00:20:16.802 Removing: /var/run/dpdk/spdk_pid71569 00:20:16.802 Removing: /var/run/dpdk/spdk_pid71752 00:20:16.802 Removing: /var/run/dpdk/spdk_pid71828 00:20:16.802 Removing: /var/run/dpdk/spdk_pid71908 00:20:16.802 Removing: /var/run/dpdk/spdk_pid72260 00:20:16.802 Removing: /var/run/dpdk/spdk_pid72655 00:20:16.802 Removing: /var/run/dpdk/spdk_pid72656 00:20:16.802 Removing: /var/run/dpdk/spdk_pid72657 00:20:16.802 Removing: /var/run/dpdk/spdk_pid72927 00:20:16.802 Removing: /var/run/dpdk/spdk_pid73188 00:20:16.802 Removing: /var/run/dpdk/spdk_pid73562 00:20:16.802 Removing: /var/run/dpdk/spdk_pid73571 00:20:16.802 Removing: /var/run/dpdk/spdk_pid73886 00:20:16.802 Removing: /var/run/dpdk/spdk_pid73905 
00:20:16.802 Removing: /var/run/dpdk/spdk_pid73925 00:20:16.802 Removing: /var/run/dpdk/spdk_pid73950 00:20:16.802 Removing: /var/run/dpdk/spdk_pid73955 00:20:16.802 Removing: /var/run/dpdk/spdk_pid74299 00:20:16.802 Removing: /var/run/dpdk/spdk_pid74348 00:20:16.802 Removing: /var/run/dpdk/spdk_pid74670 00:20:16.802 Removing: /var/run/dpdk/spdk_pid74873 00:20:16.802 Removing: /var/run/dpdk/spdk_pid75293 00:20:16.802 Removing: /var/run/dpdk/spdk_pid75836 00:20:16.802 Removing: /var/run/dpdk/spdk_pid76698 00:20:16.802 Removing: /var/run/dpdk/spdk_pid77326 00:20:16.802 Removing: /var/run/dpdk/spdk_pid77334 00:20:16.802 Removing: /var/run/dpdk/spdk_pid79345 00:20:16.802 Removing: /var/run/dpdk/spdk_pid79392 00:20:16.802 Removing: /var/run/dpdk/spdk_pid79445 00:20:16.802 Removing: /var/run/dpdk/spdk_pid79493 00:20:16.802 Removing: /var/run/dpdk/spdk_pid79601 00:20:16.802 Removing: /var/run/dpdk/spdk_pid79648 00:20:16.802 Removing: /var/run/dpdk/spdk_pid79701 00:20:16.802 Removing: /var/run/dpdk/spdk_pid79748 00:20:16.802 Removing: /var/run/dpdk/spdk_pid80114 00:20:16.802 Removing: /var/run/dpdk/spdk_pid81333 00:20:16.802 Removing: /var/run/dpdk/spdk_pid81479 00:20:16.802 Removing: /var/run/dpdk/spdk_pid81726 00:20:16.802 Removing: /var/run/dpdk/spdk_pid82327 00:20:16.802 Removing: /var/run/dpdk/spdk_pid82491 00:20:16.802 Removing: /var/run/dpdk/spdk_pid82643 00:20:16.802 Removing: /var/run/dpdk/spdk_pid82740 00:20:16.802 Removing: /var/run/dpdk/spdk_pid82895 00:20:16.802 Removing: /var/run/dpdk/spdk_pid83004 00:20:16.802 Removing: /var/run/dpdk/spdk_pid83711 00:20:16.802 Removing: /var/run/dpdk/spdk_pid83741 00:20:16.802 Removing: /var/run/dpdk/spdk_pid83776 00:20:16.802 Removing: /var/run/dpdk/spdk_pid84030 00:20:16.802 Removing: /var/run/dpdk/spdk_pid84066 00:20:16.802 Removing: /var/run/dpdk/spdk_pid84096 00:20:16.802 Removing: /var/run/dpdk/spdk_pid84568 00:20:16.802 Removing: /var/run/dpdk/spdk_pid84578 00:20:16.802 Removing: /var/run/dpdk/spdk_pid84815 00:20:16.802 Removing: /var/run/dpdk/spdk_pid84941 00:20:16.802 Removing: /var/run/dpdk/spdk_pid84955 00:20:16.802 Clean 00:20:17.061 12:55:25 -- common/autotest_common.sh@1453 -- # return 0 00:20:17.061 12:55:25 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:20:17.061 12:55:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:17.061 12:55:25 -- common/autotest_common.sh@10 -- # set +x 00:20:17.061 12:55:25 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:20:17.061 12:55:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:17.061 12:55:25 -- common/autotest_common.sh@10 -- # set +x 00:20:17.061 12:55:25 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:17.061 12:55:25 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:17.061 12:55:25 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:17.061 12:55:25 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:20:17.061 12:55:25 -- spdk/autotest.sh@398 -- # hostname 00:20:17.061 12:55:25 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:17.321 geninfo: WARNING: invalid characters removed from testname! 
00:20:39.326 12:55:47 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:41.860 12:55:50 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:44.395 12:55:52 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:46.929 12:55:55 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:49.463 12:55:57 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:51.368 12:55:59 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:53.903 12:56:02 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:53.903 12:56:02 -- spdk/autorun.sh@1 -- $ timing_finish 00:20:53.903 12:56:02 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:20:53.903 12:56:02 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:53.903 12:56:02 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:20:53.903 12:56:02 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:53.903 + [[ -n 5274 ]] 00:20:53.903 + sudo kill 5274 00:20:53.912 [Pipeline] } 00:20:53.929 [Pipeline] // timeout 00:20:53.934 [Pipeline] } 00:20:53.948 [Pipeline] // stage 00:20:53.953 [Pipeline] } 00:20:53.969 [Pipeline] // catchError 00:20:53.978 [Pipeline] stage 00:20:53.981 [Pipeline] { (Stop VM) 00:20:53.993 [Pipeline] sh 00:20:54.273 + vagrant halt 00:20:57.576 ==> default: Halting domain... 
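The string of lcov invocations above is the standard autotest coverage post-processing: capture counters for this run, merge them with the cov_base.info baseline, then strip DPDK, system, and example/app sources from the combined report. The same pipeline, with the repeated --rc coverage flags, the -t test-name tag, and the --ignore-errors option left out for brevity:

OUT=/home/vagrant/spdk_repo/output    # spdk/../output in the trace
SPDK=/home/vagrant/spdk_repo/spdk
lcov -q -c --no-external -d "$SPDK" -o "$OUT/cov_test.info"                        # capture this run
lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"   # merge with baseline
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"               # drop out-of-tree code
done
rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"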
00:21:04.197 [Pipeline] sh 00:21:04.478 + vagrant destroy -f 00:21:07.009 ==> default: Removing domain... 00:21:07.282 [Pipeline] sh 00:21:07.564 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/output 00:21:07.574 [Pipeline] } 00:21:07.588 [Pipeline] // stage 00:21:07.593 [Pipeline] } 00:21:07.606 [Pipeline] // dir 00:21:07.612 [Pipeline] } 00:21:07.626 [Pipeline] // wrap 00:21:07.631 [Pipeline] } 00:21:07.643 [Pipeline] // catchError 00:21:07.652 [Pipeline] stage 00:21:07.654 [Pipeline] { (Epilogue) 00:21:07.667 [Pipeline] sh 00:21:07.951 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:21:13.237 [Pipeline] catchError 00:21:13.239 [Pipeline] { 00:21:13.252 [Pipeline] sh 00:21:13.593 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:21:13.851 Artifacts sizes are good 00:21:13.861 [Pipeline] } 00:21:13.876 [Pipeline] // catchError 00:21:13.888 [Pipeline] archiveArtifacts 00:21:13.897 Archiving artifacts 00:21:14.024 [Pipeline] cleanWs 00:21:14.036 [WS-CLEANUP] Deleting project workspace... 00:21:14.036 [WS-CLEANUP] Deferred wipeout is used... 00:21:14.043 [WS-CLEANUP] done 00:21:14.045 [Pipeline] } 00:21:14.061 [Pipeline] // stage 00:21:14.067 [Pipeline] } 00:21:14.083 [Pipeline] // node 00:21:14.088 [Pipeline] End of Pipeline 00:21:14.123 Finished: SUCCESS